Test Report: Hyperkit_macOS 19452

667295c6870455ef3392c60a87bf7f5fdc211f00:2024-08-15:35803

Failed tests (17/327)

TestOffline (146.61s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-363000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-363000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : exit status 80 (2m21.210117856s)
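For context, the assertion at aab_offline_test.go:55 boils down to running the minikube binary with the flags shown and failing the test on a non-zero exit. A minimal standalone sketch of that pattern (illustrative only, not minikube's actual test harness; the test name and the direct use of os/exec are assumptions):

package integration

import (
	"os/exec"
	"testing"
)

func TestOfflineSketch(t *testing.T) {
	args := []string{
		"start", "-p", "offline-docker-363000",
		"--alsologtostderr", "-v=1",
		"--memory=2048", "--wait=true", "--driver=hyperkit",
	}
	// CombinedOutput captures the stdout/stderr dumps shown below.
	out, err := exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
	if err != nil {
		// A non-zero exit (status 80 in this run) surfaces as *exec.ExitError.
		t.Errorf("minikube start failed: %v\noutput:\n%s", err, out)
	}
}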

-- stdout --
	* [offline-docker-363000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-977/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "offline-docker-363000" primary control-plane node in "offline-docker-363000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "offline-docker-363000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0815 16:56:08.895470    5609 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:56:08.895764    5609 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:56:08.895769    5609 out.go:358] Setting ErrFile to fd 2...
	I0815 16:56:08.895773    5609 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:56:08.895957    5609 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-977/.minikube/bin
	I0815 16:56:08.898469    5609 out.go:352] Setting JSON to false
	I0815 16:56:08.923597    5609 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3339,"bootTime":1723762829,"procs":425,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0815 16:56:08.923699    5609 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 16:56:08.982148    5609 out.go:177] * [offline-docker-363000] minikube v1.33.1 on Darwin 14.6.1
	I0815 16:56:09.023233    5609 notify.go:220] Checking for updates...
	I0815 16:56:09.048386    5609 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 16:56:09.099122    5609 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:56:09.120333    5609 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0815 16:56:09.145111    5609 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 16:56:09.166282    5609 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-977/.minikube
	I0815 16:56:09.187310    5609 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 16:56:09.208423    5609 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 16:56:09.237213    5609 out.go:177] * Using the hyperkit driver based on user configuration
	I0815 16:56:09.279499    5609 start.go:297] selected driver: hyperkit
	I0815 16:56:09.279527    5609 start.go:901] validating driver "hyperkit" against <nil>
	I0815 16:56:09.279549    5609 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 16:56:09.284251    5609 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:56:09.284400    5609 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19452-977/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0815 16:56:09.292898    5609 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0815 16:56:09.296571    5609 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:56:09.296593    5609 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0815 16:56:09.296625    5609 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 16:56:09.296841    5609 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 16:56:09.296891    5609 cni.go:84] Creating CNI manager for ""
	I0815 16:56:09.296905    5609 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 16:56:09.296917    5609 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 16:56:09.296990    5609 start.go:340] cluster config:
	{Name:offline-docker-363000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:56:09.297073    5609 iso.go:125] acquiring lock: {Name:mkb93fc66b60be15dffa9eb2999a4be8d8d4d218 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:56:09.344427    5609 out.go:177] * Starting "offline-docker-363000" primary control-plane node in "offline-docker-363000" cluster
	I0815 16:56:09.386377    5609 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:56:09.386415    5609 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0815 16:56:09.386432    5609 cache.go:56] Caching tarball of preloaded images
	I0815 16:56:09.386539    5609 preload.go:172] Found /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0815 16:56:09.386548    5609 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:56:09.386830    5609 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/offline-docker-363000/config.json ...
	I0815 16:56:09.386850    5609 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/offline-docker-363000/config.json: {Name:mka4e144267a2fcea063ad01a6e4c8f4dab9a562 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:56:09.387193    5609 start.go:360] acquireMachinesLock for offline-docker-363000: {Name:mkac8034c8a60617271f2754a03dcaf74fc4fef7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:56:09.387254    5609 start.go:364] duration metric: took 47.31µs to acquireMachinesLock for "offline-docker-363000"
	I0815 16:56:09.387282    5609 start.go:93] Provisioning new machine with config: &{Name:offline-docker-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 16:56:09.387330    5609 start.go:125] createHost starting for "" (driver="hyperkit")
	I0815 16:56:09.408161    5609 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0815 16:56:09.408302    5609 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:56:09.408339    5609 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:56:09.417071    5609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54007
	I0815 16:56:09.417426    5609 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:56:09.417815    5609 main.go:141] libmachine: Using API Version  1
	I0815 16:56:09.417825    5609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:56:09.418072    5609 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:56:09.418192    5609 main.go:141] libmachine: (offline-docker-363000) Calling .GetMachineName
	I0815 16:56:09.418284    5609 main.go:141] libmachine: (offline-docker-363000) Calling .DriverName
	I0815 16:56:09.418417    5609 start.go:159] libmachine.API.Create for "offline-docker-363000" (driver="hyperkit")
	I0815 16:56:09.418440    5609 client.go:168] LocalClient.Create starting
	I0815 16:56:09.418474    5609 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem
	I0815 16:56:09.418528    5609 main.go:141] libmachine: Decoding PEM data...
	I0815 16:56:09.418544    5609 main.go:141] libmachine: Parsing certificate...
	I0815 16:56:09.418633    5609 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem
	I0815 16:56:09.418672    5609 main.go:141] libmachine: Decoding PEM data...
	I0815 16:56:09.418684    5609 main.go:141] libmachine: Parsing certificate...
	I0815 16:56:09.418698    5609 main.go:141] libmachine: Running pre-create checks...
	I0815 16:56:09.418707    5609 main.go:141] libmachine: (offline-docker-363000) Calling .PreCreateCheck
	I0815 16:56:09.418810    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:56:09.418997    5609 main.go:141] libmachine: (offline-docker-363000) Calling .GetConfigRaw
	I0815 16:56:09.428906    5609 main.go:141] libmachine: Creating machine...
	I0815 16:56:09.428952    5609 main.go:141] libmachine: (offline-docker-363000) Calling .Create
	I0815 16:56:09.429219    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:56:09.429469    5609 main.go:141] libmachine: (offline-docker-363000) DBG | I0815 16:56:09.429187    5630 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19452-977/.minikube
	I0815 16:56:09.429628    5609 main.go:141] libmachine: (offline-docker-363000) Downloading /Users/jenkins/minikube-integration/19452-977/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-977/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0815 16:56:09.900093    5609 main.go:141] libmachine: (offline-docker-363000) DBG | I0815 16:56:09.899985    5630 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/id_rsa...
	I0815 16:56:10.223633    5609 main.go:141] libmachine: (offline-docker-363000) DBG | I0815 16:56:10.223535    5630 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/offline-docker-363000.rawdisk...
	I0815 16:56:10.223658    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Writing magic tar header
	I0815 16:56:10.223690    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Writing SSH key tar header
	I0815 16:56:10.224216    5609 main.go:141] libmachine: (offline-docker-363000) DBG | I0815 16:56:10.224173    5630 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000 ...
	I0815 16:56:10.729298    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:56:10.729335    5609 main.go:141] libmachine: (offline-docker-363000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/hyperkit.pid
	I0815 16:56:10.729352    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Using UUID f1f429bc-90c1-4cac-b4bb-8f8e1e44278c
	I0815 16:56:10.896997    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Generated MAC 2:45:5:57:14:47
	I0815 16:56:10.897023    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-363000
	I0815 16:56:10.897066    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:56:10 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f1f429bc-90c1-4cac-b4bb-8f8e1e44278c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00019c630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:56:10.897120    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:56:10 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f1f429bc-90c1-4cac-b4bb-8f8e1e44278c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00019c630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:56:10.897180    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:56:10 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f1f429bc-90c1-4cac-b4bb-8f8e1e44278c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/offline-docker-363000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-363000"}
	I0815 16:56:10.897227    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:56:10 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f1f429bc-90c1-4cac-b4bb-8f8e1e44278c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/offline-docker-363000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/console-ring -f kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-363000"
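The Arguments/CmdLine dump above maps directly onto the machine config (2 CPUs, 2048 MB, raw disk, boot ISO, serial console). A rough reconstruction of how such an argv could be assembled (an assumed sketch, not the driver's source; the flag comments reflect common hyperkit/bhyve usage):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// State dir, UUID, and kernel args copied from the logged run.
	stateDir := "/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000"
	uuid := "f1f429bc-90c1-4cac-b4bb-8f8e1e44278c"
	kernelArgs := "earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-363000"

	args := []string{
		"-A", "-u", // generate ACPI tables; RTC keeps UTC
		"-F", stateDir + "/hyperkit.pid", // pid file the driver checks later
		"-c", "2", "-m", "2048M", // CPUs/memory from the cluster config
		"-s", "0:0,hostbridge", "-s", "31,lpc",
		"-s", "1:0,virtio-net", // NIC; the MAC is derived from the UUID
		"-U", uuid,
		"-s", "2:0,virtio-blk," + stateDir + "/offline-docker-363000.rawdisk",
		"-s", "3,ahci-cd," + stateDir + "/boot2docker.iso",
		"-s", "4,virtio-rnd",
		"-l", "com1,autopty=" + stateDir + "/tty,log=" + stateDir + "/console-ring",
		"-f", "kexec," + stateDir + "/bzimage," + stateDir + "/initrd," + kernelArgs,
	}
	fmt.Println("/usr/local/bin/hyperkit " + strings.Join(args, " "))
}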
	I0815 16:56:10.897246    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:56:10 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0815 16:56:10.900473    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:56:10 DEBUG: hyperkit: Pid is 5659
	I0815 16:56:10.900898    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 0
	I0815 16:56:10.900909    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:56:10.900962    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:56:10.901864    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for 2:45:5:57:14:47 in /var/db/dhcpd_leases ...
	I0815 16:56:10.901947    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0815 16:56:10.901964    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:56:10.902001    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:56:10.902017    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:56:10.902048    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:56:10.902085    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:56:10.902101    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:56:10.902130    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:56:10.902147    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:56:10.902158    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:56:10.902169    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:56:10.902177    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:56:10.902183    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:56:10.902225    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:56:10.902246    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:56:10.902299    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:56:10.902320    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:56:10.902333    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:56:10.902344    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
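Each "Attempt N" block that follows repeats this search: the driver polls macOS's DHCP lease database every ~2 seconds for the MAC it just generated (2:45:5:57:14:47), and in this run the lease never appears. A simplified sketch of that loop (assumed function name; lease-file parsing deliberately naive, where the real driver parses each lease block into a struct):

package main

import (
	"fmt"
	"os"
	"strings"
	"time"
)

// waitForIP polls /var/db/dhcpd_leases until a lease for mac shows up,
// mirroring the "Searching for ... in /var/db/dhcpd_leases" lines above.
func waitForIP(mac string, attempts int) (string, error) {
	for i := 0; i < attempts; i++ {
		fmt.Printf("Attempt %d\n", i)
		if data, err := os.ReadFile("/var/db/dhcpd_leases"); err == nil {
			for _, block := range strings.Split(string(data), "}") {
				if !strings.Contains(block, "hw_address=1,"+mac) {
					continue
				}
				for _, line := range strings.Split(block, "\n") {
					line = strings.TrimSpace(line)
					if strings.HasPrefix(line, "ip_address=") {
						return strings.TrimPrefix(line, "ip_address="), nil
					}
				}
			}
		}
		time.Sleep(2 * time.Second) // matches the ~2s spacing of the attempts
	}
	return "", fmt.Errorf("no DHCP lease for %s after %d attempts", mac, attempts)
}

func main() {
	ip, err := waitForIP("2:45:5:57:14:47", 9)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("VM IP:", ip)
}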
	I0815 16:56:10.908225    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:56:10 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0815 16:56:11.037156    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:56:11 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0815 16:56:11.037772    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:56:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:56:11.037794    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:56:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:56:11.037802    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:56:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:56:11.037809    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:56:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:56:11.416123    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:56:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0815 16:56:11.416140    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:56:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0815 16:56:11.531020    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:56:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:56:11.531051    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:56:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:56:11.531064    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:56:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:56:11.531076    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:56:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:56:11.531868    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:56:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0815 16:56:11.531881    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:56:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0815 16:56:12.904137    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 1
	I0815 16:56:12.904153    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:56:12.904194    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:56:12.904954    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for 2:45:5:57:14:47 in /var/db/dhcpd_leases ...
	I0815 16:56:12.905012    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0815 16:56:12.905028    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:56:12.905041    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:56:12.905059    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:56:12.905081    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:56:12.905090    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:56:12.905100    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:56:12.905108    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:56:12.905126    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:56:12.905139    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:56:12.905149    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:56:12.905158    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:56:12.905167    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:56:12.905174    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:56:12.905183    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:56:12.905197    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:56:12.905209    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:56:12.905217    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:56:12.905226    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:56:14.905821    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 2
	I0815 16:56:14.905836    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:56:14.905923    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:56:14.906732    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for 2:45:5:57:14:47 in /var/db/dhcpd_leases ...
	I0815 16:56:14.906796    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0815 16:56:14.906807    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:56:14.906836    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:56:14.906847    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:56:14.906855    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:56:14.906864    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:56:14.906875    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:56:14.906881    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:56:14.906887    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:56:14.906892    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:56:14.906912    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:56:14.906927    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:56:14.906936    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:56:14.906945    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:56:14.906958    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:56:14.906969    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:56:14.906976    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:56:14.906982    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:56:14.906990    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:56:16.899595    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:56:16 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0815 16:56:16.899740    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:56:16 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0815 16:56:16.899751    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:56:16 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0815 16:56:16.908743    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 3
	I0815 16:56:16.908753    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:56:16.908841    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:56:16.909670    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for 2:45:5:57:14:47 in /var/db/dhcpd_leases ...
	I0815 16:56:16.909730    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0815 16:56:16.909740    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:56:16.909748    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:56:16.909757    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:56:16.909779    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:56:16.909791    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:56:16.909806    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:56:16.909814    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:56:16.909820    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:56:16.909827    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:56:16.909849    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:56:16.909870    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:56:16.909881    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:56:16.909891    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:56:16.909901    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:56:16.909912    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:56:16.909931    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:56:16.909941    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:56:16.909950    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:56:16.922023    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:56:16 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0815 16:56:18.911112    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 4
	I0815 16:56:18.911135    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:56:18.911221    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:56:18.911990    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for 2:45:5:57:14:47 in /var/db/dhcpd_leases ...
	I0815 16:56:18.912045    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0815 16:56:18.912061    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:56:18.912076    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:56:18.912089    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:56:18.912104    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:56:18.912117    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:56:18.912125    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:56:18.912134    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:56:18.912153    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:56:18.912177    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:56:18.912186    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:56:18.912194    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:56:18.912204    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:56:18.912211    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:56:18.912229    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:56:18.912242    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:56:18.912250    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:56:18.912259    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:56:18.912273    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:56:20.912811    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 5
	I0815 16:56:20.912825    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:56:20.912945    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:56:20.913767    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for 2:45:5:57:14:47 in /var/db/dhcpd_leases ...
	I0815 16:56:20.913813    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0815 16:56:20.913821    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:56:20.913829    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:56:20.913836    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:56:20.913851    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:56:20.913864    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:56:20.913873    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:56:20.913889    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:56:20.913897    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:56:20.913904    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:56:20.913914    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:56:20.913924    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:56:20.913930    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:56:20.913937    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:56:20.913945    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:56:20.913952    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:56:20.913960    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:56:20.913968    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:56:20.913975    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:56:22.916057    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 6
	I0815 16:56:22.916072    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:56:22.916202    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:56:22.917024    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for 2:45:5:57:14:47 in /var/db/dhcpd_leases ...
	I0815 16:56:22.917073    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0815 16:56:22.917085    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:56:22.917110    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:56:22.917120    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:56:22.917129    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:56:22.917137    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:56:22.917144    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:56:22.917152    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:56:22.917164    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:56:22.917176    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:56:22.917185    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:56:22.917205    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:56:22.917221    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:56:22.917235    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:56:22.917242    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:56:22.917252    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:56:22.917265    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:56:22.917271    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:56:22.917279    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:56:24.919274    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 7
	I0815 16:56:24.919287    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:56:24.919395    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:56:24.920146    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for 2:45:5:57:14:47 in /var/db/dhcpd_leases ...
	I0815 16:56:24.920209    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0815 16:56:24.920220    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:56:24.920249    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:56:24.920258    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:56:24.920265    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:56:24.920271    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:56:24.920284    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:56:24.920300    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:56:24.920309    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:56:24.920318    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:56:24.920326    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:56:24.920332    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:56:24.920339    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:56:24.920345    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:56:24.920357    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:56:24.920370    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:56:24.920388    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:56:24.920402    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:56:24.920411    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:56:26.921143    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 8
	I0815 16:56:26.921155    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:56:26.921249    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:56:26.922074    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for 2:45:5:57:14:47 in /var/db/dhcpd_leases ...
	I0815 16:56:26.922110    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0815 16:56:26.922123    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:56:26.922138    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:56:26.922147    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:56:26.922180    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:56:26.922194    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:56:26.922205    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:56:26.922213    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:56:26.922220    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:56:26.922228    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:56:26.922235    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:56:26.922244    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:56:26.922259    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:56:26.922281    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:56:26.922289    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:56:26.922297    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:56:26.922314    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:56:26.922344    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:56:26.922376    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:56:28.923565    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 9
	I0815 16:56:28.923576    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:56:28.923648    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:56:28.924401    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for 2:45:5:57:14:47 in /var/db/dhcpd_leases ...
	I0815 16:56:28.924454    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0815 16:56:28.924469    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:56:28.924481    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:56:28.924488    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:56:28.924496    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:56:28.924503    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:56:28.924517    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:56:28.924531    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:56:28.924539    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:56:28.924556    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:56:28.924571    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:56:28.924584    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:56:28.924604    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:56:28.924616    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:56:28.924631    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:56:28.924638    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:56:28.924645    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:56:28.924652    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:56:28.924660    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:56:30.926077    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 10
	I0815 16:56:30.926093    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:56:30.926142    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:56:30.926948    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for 2:45:5:57:14:47 in /var/db/dhcpd_leases ...
	I0815 16:56:30.926974    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0815 16:56:30.926982    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:56:30.927001    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:56:30.927010    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:56:30.927016    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:56:30.927022    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:56:30.927031    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:56:30.927036    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:56:30.927044    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:56:30.927058    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:56:30.927065    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:56:30.927071    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:56:30.927077    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:56:30.927085    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:56:30.927094    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:56:30.927101    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:56:30.927109    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:56:30.927116    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:56:30.927122    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:56:32.928713    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 11
	I0815 16:56:32.928727    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:56:32.928768    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:56:32.929534    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for 2:45:5:57:14:47 in /var/db/dhcpd_leases ...
	I0815 16:56:32.929585    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0815 16:56:32.929599    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:56:32.929608    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:56:32.929626    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:56:32.929641    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:56:32.929655    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:56:32.929663    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:56:32.929672    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:56:32.929679    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:56:32.929688    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:56:32.929703    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:56:32.929711    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:56:32.929733    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:56:32.929744    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:56:32.929754    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:56:32.929760    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:56:32.929767    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:56:32.929772    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:56:32.929778    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:56:34.931811    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 12
	I0815 16:56:34.931826    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:56:34.931877    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:56:34.932699    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for 2:45:5:57:14:47 in /var/db/dhcpd_leases ...
	I0815 16:56:34.932737    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0815 16:56:34.932745    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:56:34.932757    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:56:34.932764    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:56:34.932771    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:56:34.932777    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:56:34.932788    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:56:34.932794    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:56:34.932801    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:56:34.932809    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:56:34.932826    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:56:34.932837    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:56:34.932846    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:56:34.932854    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:56:34.932862    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:56:34.932869    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:56:34.932876    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:56:34.932884    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:56:34.932893    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:56:36.934951    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 13
	I0815 16:56:36.934962    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:56:36.935101    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:56:36.935875    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for 2:45:5:57:14:47 in /var/db/dhcpd_leases ...
	I0815 16:56:36.935931    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0815 16:56:36.935940    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:56:36.935954    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:56:36.935965    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:56:36.935972    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:56:36.935978    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:56:36.935998    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:56:36.936012    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:56:36.936020    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:56:36.936029    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:56:36.936036    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:56:36.936043    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:56:36.936062    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:56:36.936072    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:56:36.936080    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:56:36.936088    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:56:36.936103    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:56:36.936112    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:56:36.936121    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:56:38.937406    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 14
	I0815 16:56:38.937424    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:56:38.937503    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:56:38.938266    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for 2:45:5:57:14:47 in /var/db/dhcpd_leases ...
	I0815 16:56:38.938316    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0815 16:56:38.938330    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:56:38.938341    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:56:38.938347    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:56:38.938363    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:56:38.938371    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:56:38.938377    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:56:38.938388    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:56:38.938399    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:56:38.938406    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:56:38.938412    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:56:38.938420    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:56:38.938426    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:56:38.938433    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:56:38.938440    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:56:38.938448    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:56:38.938458    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:56:38.938466    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:56:38.938474    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:56:40.938571    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 15
	I0815 16:56:40.938587    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:56:40.938649    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:56:40.939683    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for 2:45:5:57:14:47 in /var/db/dhcpd_leases ...
	I0815 16:56:40.939715    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0815 16:56:40.939724    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:56:40.939736    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:56:40.939743    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:56:40.939749    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:56:40.939757    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:56:40.939782    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:56:40.939793    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:56:40.939811    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:56:40.939824    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:56:40.939832    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:56:40.939840    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:56:40.939846    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:56:40.939855    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:56:40.939862    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:56:40.939871    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:56:40.939878    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:56:40.939885    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:56:40.939894    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:56:42.940753    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 16
	I0815 16:56:42.940766    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:56:42.940830    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:56:42.941624    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for 2:45:5:57:14:47 in /var/db/dhcpd_leases ...
	I0815 16:56:42.941666    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0815 16:56:42.941676    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:56:42.941697    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:56:42.941704    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:56:42.941710    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:56:42.941716    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:56:42.941723    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:56:42.941728    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:56:42.941756    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:56:42.941769    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:56:42.941777    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:56:42.941785    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:56:42.941792    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:56:42.941802    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:56:42.941811    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:56:42.941828    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:56:42.941836    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:56:42.941853    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:56:42.941870    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:56:44.942951    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 17
	I0815 16:56:44.942971    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:56:44.943041    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:56:44.943811    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for 2:45:5:57:14:47 in /var/db/dhcpd_leases ...
	I0815 16:56:44.943880    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0815 16:56:44.943893    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:56:44.943905    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:56:44.943915    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:56:44.943922    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:56:44.943928    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:56:44.943935    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:56:44.943941    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:56:44.943948    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:56:44.943956    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:56:44.943976    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:56:44.943988    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:56:44.943996    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:56:44.944004    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:56:44.944011    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:56:44.944024    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:56:44.944032    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:56:44.944040    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:56:44.944049    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:56:46.944584    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 18
	I0815 16:56:46.944599    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:56:46.944673    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:56:46.945488    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for 2:45:5:57:14:47 in /var/db/dhcpd_leases ...
	I0815 16:56:46.945530    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0815 16:56:46.945540    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:56:46.945551    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:56:46.945561    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:56:46.945568    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:56:46.945578    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:56:46.945588    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:56:46.945597    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:56:46.945604    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:56:46.945611    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:56:46.945620    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:56:46.945625    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:56:46.945637    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:56:46.945645    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:56:46.945665    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:56:46.945677    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:56:46.945685    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:56:46.945695    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:56:46.945733    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:56:48.947760    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 19
	I0815 16:56:48.947775    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:56:48.947840    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:56:48.948775    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for 2:45:5:57:14:47 in /var/db/dhcpd_leases ...
	I0815 16:56:48.948837    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0815 16:56:48.948848    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:56:48.948855    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:56:48.948862    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:56:48.948871    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:56:48.948877    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:56:48.948883    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:56:48.948889    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:56:48.948906    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:56:48.948922    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:56:48.948935    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:56:48.948943    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:56:48.948952    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:56:48.948959    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:56:48.948967    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:56:48.948987    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:56:48.949000    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:56:48.949016    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:56:48.949024    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:56:50.950513    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 20
	I0815 16:56:50.950524    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:56:50.950597    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:56:50.951360    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for 2:45:5:57:14:47 in /var/db/dhcpd_leases ...
	I0815 16:56:50.951401    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0815 16:56:50.951411    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:56:50.951420    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:56:50.951427    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:56:50.951436    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:56:50.951442    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:56:50.951449    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:56:50.951458    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:56:50.951468    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:56:50.951477    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:56:50.951486    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:56:50.951493    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:56:50.951500    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:56:50.951508    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:56:50.951514    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:56:50.951523    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:56:50.951529    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:56:50.951537    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:56:50.951545    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:56:52.951713    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 21
	I0815 16:56:52.951727    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:56:52.951782    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:56:52.952683    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for 2:45:5:57:14:47 in /var/db/dhcpd_leases ...
	I0815 16:56:52.952728    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0815 16:56:52.952742    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:56:52.952755    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:56:52.952774    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:56:52.952786    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:56:52.952795    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:56:52.952803    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:56:52.952811    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:56:52.952823    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:56:52.952834    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:56:52.952841    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:56:52.952846    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:56:52.952859    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:56:52.952872    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:56:52.952889    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:56:52.952895    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:56:52.952902    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:56:52.952915    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:56:52.952931    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:56:54.954913    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 22
	I0815 16:56:54.954926    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:56:54.955009    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:56:54.955889    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for 2:45:5:57:14:47 in /var/db/dhcpd_leases ...
	I0815 16:56:54.955918    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0815 16:56:54.955927    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:56:54.955936    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:56:54.955942    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:56:54.955961    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:56:54.955971    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:56:54.955988    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:56:54.956000    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:56:54.956009    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:56:54.956016    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:56:54.956042    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:56:54.956061    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:56:54.956070    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:56:54.956078    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:56:54.956088    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:56:54.956097    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:56:54.956103    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:56:54.956115    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:56:54.956132    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:56:56.958159    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 23
	I0815 16:56:56.958171    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:56:56.958237    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:56:56.958993    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for 2:45:5:57:14:47 in /var/db/dhcpd_leases ...
	I0815 16:56:56.959060    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0815 16:56:56.959073    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:56:56.959085    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:56:56.959095    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:56:56.959105    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:56:56.959111    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:56:56.959130    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:56:56.959141    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:56:56.959156    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:56:56.959169    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:56:56.959178    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:56:56.959198    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:56:56.959212    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:56:56.959225    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:56:56.959235    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:56:56.959243    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:56:56.959258    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:56:56.959272    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:56:56.959283    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:56:58.959269    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 24
	I0815 16:56:58.959280    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:56:58.959340    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:56:58.960415    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for 2:45:5:57:14:47 in /var/db/dhcpd_leases ...
	I0815 16:56:58.960461    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0815 16:56:58.960472    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:56:58.960479    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:56:58.960486    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:56:58.960495    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:56:58.960503    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:56:58.960513    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:56:58.960519    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:56:58.960526    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:56:58.960534    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:56:58.960540    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:56:58.960548    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:56:58.960556    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:56:58.960564    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:56:58.960571    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:56:58.960579    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:56:58.960596    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:56:58.960609    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:56:58.960623    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:57:00.961133    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 25
	I0815 16:57:00.961149    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:57:00.961205    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:57:00.961988    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for 2:45:5:57:14:47 in /var/db/dhcpd_leases ...
	I0815 16:57:00.962028    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0815 16:57:00.962048    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:57:00.962058    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:57:00.962065    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:57:00.962071    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:57:00.962078    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:57:00.962085    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:57:00.962101    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:57:00.962114    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:57:00.962122    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:57:00.962130    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:57:00.962146    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:57:00.962157    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:57:00.962166    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:57:00.962177    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:57:00.962189    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:57:00.962199    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:57:00.962206    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:57:00.962214    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:57:02.964239    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 26
	I0815 16:57:02.964253    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:57:02.964314    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:57:02.965065    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for 2:45:5:57:14:47 in /var/db/dhcpd_leases ...
	I0815 16:57:02.965113    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0815 16:57:02.965126    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:57:02.965135    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:57:02.965141    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:57:02.965150    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:57:02.965160    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:57:02.965168    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:57:02.965177    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:57:02.965184    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:57:02.965191    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:57:02.965207    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:57:02.965221    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:57:02.965228    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:57:02.965236    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:57:02.965244    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:57:02.965251    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:57:02.965259    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:57:02.965265    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:57:02.965281    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:57:04.967291    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 27
	I0815 16:57:04.967305    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:57:04.967412    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:57:04.968233    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for 2:45:5:57:14:47 in /var/db/dhcpd_leases ...
	I0815 16:57:04.968277    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0815 16:57:04.968288    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:57:04.968303    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:57:04.968314    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:57:04.968321    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:57:04.968329    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:57:04.968341    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:57:04.968357    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:57:04.968365    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:57:04.968382    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:57:04.968399    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:57:04.968414    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:57:04.968428    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:57:04.968436    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:57:04.968453    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:57:04.968464    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:57:04.968477    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:57:04.968493    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:57:04.968510    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:57:06.970494    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 28
	I0815 16:57:06.970504    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:57:06.970570    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:57:06.971327    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for 2:45:5:57:14:47 in /var/db/dhcpd_leases ...
	I0815 16:57:06.971371    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0815 16:57:06.971395    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:57:06.971402    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:57:06.971422    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:57:06.971435    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:57:06.971444    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:57:06.971450    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:57:06.971457    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:57:06.971465    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:57:06.971472    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:57:06.971480    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:57:06.971487    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:57:06.971493    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:57:06.971512    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:57:06.971523    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:57:06.971531    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:57:06.971538    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:57:06.971549    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:57:06.971558    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:57:08.972974    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 29
	I0815 16:57:08.972995    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:57:08.973067    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:57:08.973821    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for 2:45:5:57:14:47 in /var/db/dhcpd_leases ...
	I0815 16:57:08.973867    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0815 16:57:08.973881    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:57:08.973914    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:57:08.973927    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:57:08.973936    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:57:08.973947    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:57:08.973959    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:57:08.973971    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:57:08.973978    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:57:08.973994    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:57:08.974001    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:57:08.974007    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:57:08.974017    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:57:08.974025    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:57:08.974042    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:57:08.974054    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:57:08.974070    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:57:08.974083    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:57:08.974093    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:57:10.975591    5609 client.go:171] duration metric: took 1m1.555939858s to LocalClient.Create
	I0815 16:57:12.977018    5609 start.go:128] duration metric: took 1m3.588437023s to createHost
	I0815 16:57:12.977029    5609 start.go:83] releasing machines lock for "offline-docker-363000", held for 1m3.588526824s
	W0815 16:57:12.977057    5609 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 2:45:5:57:14:47
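
Note: the 29 "Attempt" blocks above, and the error on the line just before this note, are the hyperkit driver's IP-discovery loop: after launching the VM it re-reads macOS's /var/db/dhcpd_leases roughly every two seconds, looking for a lease whose hardware address matches the MAC generated for the new VM (2:45:5:57:14:47 in this run). The lease never appears, so machine creation fails. Below is a minimal sketch of such a poll loop, written for this report as an illustration only, not the actual driver code; it assumes the stock macOS lease-file layout of name=/ip_address=/hw_address= lines inside braced entries.

// Illustrative sketch of a dhcpd_leases poll loop (not minikube's code).
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

// findIPForMAC scans the leases file once and returns the IP bound to
// hwaddr, if any. Entries look roughly like:
//
//	{
//		name=minikube
//		ip_address=192.169.0.19
//		hw_address=1,8e:64:c7:b3:41:c2
//		lease=0x66bfe6b3
//	}
func findIPForMAC(path, hwaddr string) (string, bool) {
	f, err := os.Open(path)
	if err != nil {
		return "", false // treat an unreadable file as "no lease yet"
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if v, ok := strings.CutPrefix(line, "ip_address="); ok {
			ip = v // remember the IP until we see the matching hw_address
		}
		if v, ok := strings.CutPrefix(line, "hw_address=1,"); ok && v == hwaddr {
			return ip, true
		}
	}
	return "", false
}

func main() {
	const mac = "2:45:5:57:14:47" // the MAC from the failing run above
	for attempt := 0; attempt < 30; attempt++ {
		if ip, ok := findIPForMAC("/var/db/dhcpd_leases", mac); ok {
			fmt.Printf("found IP %s for %s\n", ip, mac)
			return
		}
		time.Sleep(2 * time.Second) // the log shows ~2s between attempts
	}
	fmt.Fprintf(os.Stderr, "could not find an IP address for %s\n", mac)
	os.Exit(1)
}

In this run the guest evidently never completed DHCP, so every scan of the 18 existing leases (192.169.0.2 through 192.169.0.19) came up empty until the driver gave up and deleted the machine.
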
	I0815 16:57:12.977377    5609 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:57:12.977403    5609 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:57:12.986555    5609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54011
	I0815 16:57:12.986897    5609 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:57:12.987243    5609 main.go:141] libmachine: Using API Version  1
	I0815 16:57:12.987257    5609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:57:12.987487    5609 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:57:12.987836    5609 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:57:12.987878    5609 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:57:12.996260    5609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54013
	I0815 16:57:12.996602    5609 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:57:12.996942    5609 main.go:141] libmachine: Using API Version  1
	I0815 16:57:12.996951    5609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:57:12.997191    5609 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:57:12.997350    5609 main.go:141] libmachine: (offline-docker-363000) Calling .GetState
	I0815 16:57:12.997441    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:57:12.997519    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:57:12.998450    5609 main.go:141] libmachine: (offline-docker-363000) Calling .DriverName
	I0815 16:57:13.056430    5609 out.go:177] * Deleting "offline-docker-363000" in hyperkit ...
	I0815 16:57:13.098441    5609 main.go:141] libmachine: (offline-docker-363000) Calling .Remove
	I0815 16:57:13.098591    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:57:13.098608    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:57:13.098651    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:57:13.099585    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:57:13.099626    5609 main.go:141] libmachine: (offline-docker-363000) DBG | waiting for graceful shutdown
	I0815 16:57:14.101332    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:57:14.101406    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:57:14.102292    5609 main.go:141] libmachine: (offline-docker-363000) DBG | waiting for graceful shutdown
	I0815 16:57:15.103135    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:57:15.103242    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:57:15.104909    5609 main.go:141] libmachine: (offline-docker-363000) DBG | waiting for graceful shutdown
	I0815 16:57:16.107067    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:57:16.107129    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:57:16.107870    5609 main.go:141] libmachine: (offline-docker-363000) DBG | waiting for graceful shutdown
	I0815 16:57:17.109286    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:57:17.109388    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:57:17.109984    5609 main.go:141] libmachine: (offline-docker-363000) DBG | waiting for graceful shutdown
	I0815 16:57:18.110605    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:57:18.110664    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5659
	I0815 16:57:18.111714    5609 main.go:141] libmachine: (offline-docker-363000) DBG | sending sigkill
	I0815 16:57:18.111724    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:57:18.121731    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:57:18 WARN : hyperkit: failed to read stdout: EOF
	I0815 16:57:18.121748    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:57:18 WARN : hyperkit: failed to read stderr: EOF
	W0815 16:57:18.148232    5609 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 2:45:5:57:14:47
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 2:45:5:57:14:47
	I0815 16:57:18.148252    5609 start.go:729] Will try again in 5 seconds ...
	I0815 16:57:23.149678    5609 start.go:360] acquireMachinesLock for offline-docker-363000: {Name:mkac8034c8a60617271f2754a03dcaf74fc4fef7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:57:26.959442    5609 start.go:364] duration metric: took 3.809642444s to acquireMachinesLock for "offline-docker-363000"
	I0815 16:57:26.959474    5609 start.go:93] Provisioning new machine with config: &{Name:offline-docker-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 16:57:26.959540    5609 start.go:125] createHost starting for "" (driver="hyperkit")
	I0815 16:57:26.982780    5609 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0815 16:57:26.982851    5609 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:57:26.982880    5609 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:57:26.991681    5609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54038
	I0815 16:57:26.992034    5609 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:57:26.992381    5609 main.go:141] libmachine: Using API Version  1
	I0815 16:57:26.992399    5609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:57:26.992601    5609 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:57:26.992723    5609 main.go:141] libmachine: (offline-docker-363000) Calling .GetMachineName
	I0815 16:57:26.992810    5609 main.go:141] libmachine: (offline-docker-363000) Calling .DriverName
	I0815 16:57:26.992914    5609 start.go:159] libmachine.API.Create for "offline-docker-363000" (driver="hyperkit")
	I0815 16:57:26.992936    5609 client.go:168] LocalClient.Create starting
	I0815 16:57:26.992966    5609 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem
	I0815 16:57:26.993025    5609 main.go:141] libmachine: Decoding PEM data...
	I0815 16:57:26.993036    5609 main.go:141] libmachine: Parsing certificate...
	I0815 16:57:26.993079    5609 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem
	I0815 16:57:26.993108    5609 main.go:141] libmachine: Decoding PEM data...
	I0815 16:57:26.993120    5609 main.go:141] libmachine: Parsing certificate...
	I0815 16:57:26.993134    5609 main.go:141] libmachine: Running pre-create checks...
	I0815 16:57:26.993139    5609 main.go:141] libmachine: (offline-docker-363000) Calling .PreCreateCheck
	I0815 16:57:26.993209    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:57:26.993233    5609 main.go:141] libmachine: (offline-docker-363000) Calling .GetConfigRaw
	I0815 16:57:27.004594    5609 main.go:141] libmachine: Creating machine...
	I0815 16:57:27.004610    5609 main.go:141] libmachine: (offline-docker-363000) Calling .Create
	I0815 16:57:27.004750    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:57:27.004943    5609 main.go:141] libmachine: (offline-docker-363000) DBG | I0815 16:57:27.004743    5759 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19452-977/.minikube
	I0815 16:57:27.005018    5609 main.go:141] libmachine: (offline-docker-363000) Downloading /Users/jenkins/minikube-integration/19452-977/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-977/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0815 16:57:27.190758    5609 main.go:141] libmachine: (offline-docker-363000) DBG | I0815 16:57:27.190681    5759 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/id_rsa...
	I0815 16:57:27.330624    5609 main.go:141] libmachine: (offline-docker-363000) DBG | I0815 16:57:27.330542    5759 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/offline-docker-363000.rawdisk...
	I0815 16:57:27.330637    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Writing magic tar header
	I0815 16:57:27.330652    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Writing SSH key tar header
	I0815 16:57:27.331260    5609 main.go:141] libmachine: (offline-docker-363000) DBG | I0815 16:57:27.331223    5759 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000 ...
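
Note: the three common.go steps just above (raw disk image, "Writing magic tar header", "Writing SSH key tar header") reflect a layout many docker-machine drivers use: the .rawdisk file starts out as a tar stream carrying the freshly generated SSH key, apparently so the guest can detect the tar header and pick up the key on first boot, and the file is then grown to its nominal capacity so the hypervisor sees a fixed-size block device. Below is a hedged sketch of that layout; the paths, key file, and tar entry name are hypothetical, not taken from the driver.

// Sketch of seeding a raw disk with an SSH key via a leading tar stream
// (illustrative only; paths and entry names are hypothetical).
package main

import (
	"archive/tar"
	"log"
	"os"
)

func main() {
	const (
		diskPath = "offline-docker-363000.rawdisk" // hypothetical output path
		diskSize = 20000 * 1024 * 1024             // 20000MB, as in this run
	)

	key, err := os.ReadFile("id_rsa.pub") // hypothetical key material
	if err != nil {
		log.Fatal(err)
	}

	f, err := os.Create(diskPath)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// The tar header at offset 0 is the "magic" the log mentions: a guest
	// that knows to look for it can untar the key before using the disk.
	tw := tar.NewWriter(f)
	hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0600, Size: int64(len(key))}
	if err := tw.WriteHeader(hdr); err != nil {
		log.Fatal(err)
	}
	if _, err := tw.Write(key); err != nil {
		log.Fatal(err)
	}
	if err := tw.Close(); err != nil {
		log.Fatal(err)
	}

	// Grow the file to its full nominal size; the tail remains sparse.
	if err := f.Truncate(diskSize); err != nil {
		log.Fatal(err)
	}
}
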
	I0815 16:57:27.706710    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:57:27.706740    5609 main.go:141] libmachine: (offline-docker-363000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/hyperkit.pid
	I0815 16:57:27.706758    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Using UUID 10647513-65e4-4a94-b70d-047f4eee4c8e
	I0815 16:57:27.734548    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Generated MAC a:c9:e4:13:77:83
	I0815 16:57:27.734566    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-363000
	I0815 16:57:27.734597    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:57:27 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"10647513-65e4-4a94-b70d-047f4eee4c8e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:57:27.734627    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:57:27 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"10647513-65e4-4a94-b70d-047f4eee4c8e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:57:27.734710    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:57:27 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "10647513-65e4-4a94-b70d-047f4eee4c8e", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/offline-docker-363000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-363000"}
	I0815 16:57:27.734771    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:57:27 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 10647513-65e4-4a94-b70d-047f4eee4c8e -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/offline-docker-363000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/console-ring -f kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-363000"
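[editor's note] The CmdLine above is the fully assembled hyperkit invocation for this VM. For reference, it can be reproduced outside the driver with a minimal Go sketch like the one below; this is not minikube code, and the state directory and UUID are simply copied from the log in this run (running it requires root and a hyperkit install):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// State directory taken verbatim from this test run; adjust for your machine.
		state := "/Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000"

		args := []string{
			"-A", "-u",
			"-F", state + "/hyperkit.pid", // hyperkit writes its pid here
			"-c", "2", // 2 vCPUs
			"-m", "2048M", // 2048 MB of RAM
			"-s", "0:0,hostbridge",
			"-s", "31,lpc",
			"-s", "1:0,virtio-net",
			"-U", "10647513-65e4-4a94-b70d-047f4eee4c8e", // UUID tied to the generated MAC
			"-s", "2:0,virtio-blk," + state + "/offline-docker-363000.rawdisk",
			"-s", "3,ahci-cd," + state + "/boot2docker.iso",
			"-s", "4,virtio-rnd",
			"-l", "com1,autopty=" + state + "/tty,log=" + state + "/console-ring",
			"-f", "kexec," + state + "/bzimage," + state + "/initrd," +
				"earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 " +
				"systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-363000",
		}

		cmd := exec.Command("/usr/local/bin/hyperkit", args...)
		if err := cmd.Start(); err != nil {
			log.Fatalf("hyperkit failed to start: %v", err)
		}
		log.Printf("hyperkit pid: %d", cmd.Process.Pid)
	}
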
	I0815 16:57:27.734784    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:57:27 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0815 16:57:27.737754    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:57:27 DEBUG: hyperkit: Pid is 5760
	I0815 16:57:27.738777    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 0
	I0815 16:57:27.738791    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:57:27.738874    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5760
	I0815 16:57:27.739762    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for a:c9:e4:13:77:83 in /var/db/dhcpd_leases ...
	I0815 16:57:27.739866    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0815 16:57:27.739882    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe762}
	I0815 16:57:27.739901    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:57:27.739917    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:57:27.739938    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:57:27.739965    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:57:27.739978    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:57:27.739992    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:57:27.740015    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:57:27.740025    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:57:27.740040    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:57:27.740050    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:57:27.740058    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:57:27.740075    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:57:27.740084    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:57:27.740095    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:57:27.740110    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:57:27.740119    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:57:27.740131    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:57:27.740149    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:57:27.745512    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:57:27 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0815 16:57:27.754683    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:57:27 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/offline-docker-363000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0815 16:57:27.755857    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:57:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:57:27.755882    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:57:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:57:27.755921    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:57:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:57:27.755946    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:57:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:57:28.137032    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:57:28 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0815 16:57:28.137046    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:57:28 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0815 16:57:28.252360    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:57:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:57:28.252389    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:57:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:57:28.252414    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:57:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:57:28.252431    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:57:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:57:28.253285    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:57:28 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0815 16:57:28.253295    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:57:28 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0815 16:57:29.741032    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 1
	I0815 16:57:29.741046    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:57:29.741154    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5760
	I0815 16:57:29.741924    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for a:c9:e4:13:77:83 in /var/db/dhcpd_leases ...
	I0815 16:57:29.741984    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0815 16:57:29.741992    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe762}
	I0815 16:57:29.742024    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:57:29.742037    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:57:29.742045    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:57:29.742062    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:57:29.742075    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:57:29.742083    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:57:29.742092    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:57:29.742100    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:57:29.742108    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:57:29.742116    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:57:29.742124    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:57:29.742132    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:57:29.742141    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:57:29.742150    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:57:29.742157    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:57:29.742163    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:57:29.742169    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:57:29.742177    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
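[editor's note] The "Attempt N" blocks repeat roughly every two seconds: the driver re-reads /var/db/dhcpd_leases and scans the 19 existing minikube leases for the freshly generated MAC a:c9:e4:13:77:83, which never appears, so the loop keeps retrying. A minimal sketch of that polling logic, under the assumption of a simple substring scan (the function name waitForLease is hypothetical; the real parsing in docker-machine-driver-hyperkit is more structured), might look like:

	package main

	import (
		"fmt"
		"os"
		"strings"
		"time"
	)

	// waitForLease polls the macOS DHCP lease database until the given
	// hardware address shows up or the attempt budget is exhausted.
	func waitForLease(mac string, attempts int) (bool, error) {
		for i := 0; i < attempts; i++ {
			data, err := os.ReadFile("/var/db/dhcpd_leases")
			if err != nil && !os.IsNotExist(err) {
				return false, err
			}
			if strings.Contains(string(data), mac) {
				return true, nil
			}
			time.Sleep(2 * time.Second) // matches the ~2s spacing between attempts above
		}
		return false, nil
	}

	func main() {
		found, err := waitForLease("a:c9:e4:13:77:83", 30)
		if err != nil {
			fmt.Fprintln(os.Stderr, "error:", err)
			os.Exit(1)
		}
		fmt.Println("lease found:", found)
	}
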
	I0815 16:57:31.742575    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 2
	I0815 16:57:31.742594    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:57:31.742667    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5760
	I0815 16:57:31.743468    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for a:c9:e4:13:77:83 in /var/db/dhcpd_leases ...
	I0815 16:57:31.743536    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0815 16:57:31.743552    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe762}
	I0815 16:57:31.743580    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:57:31.743592    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:57:31.743602    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:57:31.743610    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:57:31.743619    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:57:31.743627    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:57:31.743634    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:57:31.743664    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:57:31.743678    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:57:31.743686    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:57:31.743692    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:57:31.743700    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:57:31.743706    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:57:31.743715    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:57:31.743721    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:57:31.743737    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:57:31.743751    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:57:31.743770    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:57:33.745388    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 3
	I0815 16:57:33.745407    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:57:33.745519    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5760
	I0815 16:57:33.746492    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for a:c9:e4:13:77:83 in /var/db/dhcpd_leases ...
	I0815 16:57:33.746550    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0815 16:57:33.746565    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe762}
	I0815 16:57:33.746581    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:57:33.746597    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:57:33.746647    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:57:33.746673    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:57:33.746710    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:57:33.746729    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:57:33.746747    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:57:33.746763    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:57:33.746776    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:57:33.746786    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:57:33.746812    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:57:33.746865    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:57:33.746887    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:57:33.746909    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:57:33.746924    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:57:33.746971    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:57:33.746985    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:57:33.746996    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:57:33.964090    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:57:33 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0815 16:57:33.964221    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:57:33 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0815 16:57:33.964241    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:57:33 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0815 16:57:33.984611    5609 main.go:141] libmachine: (offline-docker-363000) DBG | 2024/08/15 16:57:33 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0815 16:57:35.748035    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 4
	I0815 16:57:35.748052    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:57:35.748153    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5760
	I0815 16:57:35.748933    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for a:c9:e4:13:77:83 in /var/db/dhcpd_leases ...
	I0815 16:57:35.749003    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0815 16:57:35.749013    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe762}
	I0815 16:57:35.749024    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:57:35.749037    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:57:35.749046    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:57:35.749051    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:57:35.749061    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:57:35.749069    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:57:35.749076    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:57:35.749081    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:57:35.749091    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:57:35.749105    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:57:35.749113    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:57:35.749123    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:57:35.749137    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:57:35.749152    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:57:35.749168    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:57:35.749177    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:57:35.749185    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:57:35.749192    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:57:37.750949    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 5
	I0815 16:57:37.750967    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:57:37.751048    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5760
	I0815 16:57:37.751939    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for a:c9:e4:13:77:83 in /var/db/dhcpd_leases ...
	I0815 16:57:37.752013    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0815 16:57:37.752031    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe762}
	I0815 16:57:37.752047    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:57:37.752062    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:57:37.752076    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:57:37.752087    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:57:37.752112    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:57:37.752138    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:57:37.752154    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:57:37.752164    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:57:37.752175    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:57:37.752187    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:57:37.752211    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:57:37.752222    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:57:37.752240    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:57:37.752256    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:57:37.752266    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:57:37.752276    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:57:37.752284    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:57:37.752294    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:57:39.753778    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 6
	I0815 16:57:39.753795    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:57:39.753895    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5760
	I0815 16:57:39.754663    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for a:c9:e4:13:77:83 in /var/db/dhcpd_leases ...
	I0815 16:57:39.754718    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0815 16:57:39.754729    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe762}
	I0815 16:57:39.754738    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:57:39.754745    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:57:39.754761    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:57:39.754767    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:57:39.754774    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:57:39.754780    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:57:39.754786    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:57:39.754795    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:57:39.754809    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:57:39.754818    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:57:39.754833    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:57:39.754848    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:57:39.754857    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:57:39.754877    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:57:39.754885    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:57:39.754893    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:57:39.754901    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:57:39.754909    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:57:41.754917    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 7
	I0815 16:57:41.754934    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:57:41.755040    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5760
	I0815 16:57:41.755864    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for a:c9:e4:13:77:83 in /var/db/dhcpd_leases ...
	I0815 16:57:41.755907    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0815 16:57:41.755944    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe762}
	I0815 16:57:41.755953    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:57:41.755961    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:57:41.755976    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:57:41.755986    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:57:41.755994    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:57:41.756000    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:57:41.756008    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:57:41.756019    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:57:41.756026    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:57:41.756033    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:57:41.756042    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:57:41.756056    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:57:41.756070    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:57:41.756090    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:57:41.756099    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:57:41.756129    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:57:41.756144    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:57:41.756157    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:57:43.756556    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 8
	I0815 16:57:43.756573    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:57:43.756587    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5760
	I0815 16:57:43.757650    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for a:c9:e4:13:77:83 in /var/db/dhcpd_leases ...
	I0815 16:57:43.757693    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0815 16:57:43.757703    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe762}
	I0815 16:57:43.757720    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:57:43.757735    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:57:43.757747    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:57:43.757755    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:57:43.757763    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:57:43.757782    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:57:43.757793    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:57:43.757802    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:57:43.757813    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:57:43.757822    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:57:43.757832    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:57:43.757841    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:57:43.757853    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:57:43.757865    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:57:43.757876    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:57:43.757883    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:57:43.757895    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:57:43.757907    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:57:45.759471    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 9
	I0815 16:57:45.759486    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:57:45.759565    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5760
	I0815 16:57:45.760408    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for a:c9:e4:13:77:83 in /var/db/dhcpd_leases ...
	I0815 16:57:45.760440    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0815 16:57:45.760447    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe762}
	I0815 16:57:45.760456    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:57:45.760465    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:57:45.760473    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:57:45.760480    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:57:45.760486    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:57:45.760498    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:57:45.760509    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:57:45.760525    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:57:45.760539    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:57:45.760547    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:57:45.760555    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:57:45.760573    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:57:45.760584    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:57:45.760601    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:57:45.760610    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:57:45.760617    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:57:45.760625    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:57:45.760642    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:57:47.760666    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 10
	I0815 16:57:47.760689    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:57:47.760784    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5760
	I0815 16:57:47.761552    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for a:c9:e4:13:77:83 in /var/db/dhcpd_leases ...
	I0815 16:57:47.761622    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0815 16:57:47.761633    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe762}
	I0815 16:57:47.761656    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:57:47.761666    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:57:47.761673    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:57:47.761679    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:57:47.761696    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:57:47.761705    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:57:47.761712    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:57:47.761720    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:57:47.761729    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:57:47.761742    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:57:47.761757    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:57:47.761764    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:57:47.761771    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:57:47.761780    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:57:47.761787    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:57:47.761794    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:57:47.761802    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:57:47.761810    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:57:49.761968    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 11
	I0815 16:57:49.761988    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:57:49.762043    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5760
	I0815 16:57:49.762896    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for a:c9:e4:13:77:83 in /var/db/dhcpd_leases ...
	I0815 16:57:49.762963    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0815 16:57:49.762977    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe762}
	I0815 16:57:49.762993    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:57:49.763005    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:57:49.763062    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:57:49.763077    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:57:49.763095    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:57:49.763105    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:57:49.763114    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:57:49.763126    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:57:49.763138    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:57:49.763152    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:57:49.763184    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:57:49.763197    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:57:49.763209    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:57:49.763219    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:57:49.763228    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:57:49.763235    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:57:49.763249    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:57:49.763258    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:57:51.765069    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 12
	I0815 16:57:51.765092    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:57:51.765197    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5760
	I0815 16:57:51.766049    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for a:c9:e4:13:77:83 in /var/db/dhcpd_leases ...
	I0815 16:57:51.766110    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0815 16:57:51.766133    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe762}
	I0815 16:57:51.766150    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:57:51.766172    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:57:51.766186    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:57:51.766193    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:57:51.766209    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:57:51.766219    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:57:51.766229    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:57:51.766238    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:57:51.766250    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:57:51.766259    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:57:51.766270    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:57:51.766278    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:57:51.766285    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:57:51.766291    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:57:51.766316    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:57:51.766327    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:57:51.766335    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:57:51.766344    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:57:53.768331    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 13
	I0815 16:57:53.768348    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:57:53.768412    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5760
	I0815 16:57:53.769184    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for a:c9:e4:13:77:83 in /var/db/dhcpd_leases ...
	I0815 16:57:53.769238    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0815 16:57:53.769246    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe762}
	I0815 16:57:53.769256    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:57:53.769262    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:57:53.769269    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:57:53.769274    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:57:53.769281    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:57:53.769287    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:57:53.769306    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:57:53.769314    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:57:53.769320    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:57:53.769327    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:57:53.769345    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:57:53.769359    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:57:53.769367    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:57:53.769375    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:57:53.769382    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:57:53.769388    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:57:53.769403    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:57:53.769410    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:57:55.770493    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 14
	I0815 16:57:55.770519    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:57:55.770615    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5760
	I0815 16:57:55.771445    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for a:c9:e4:13:77:83 in /var/db/dhcpd_leases ...
	I0815 16:57:55.771509    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0815 16:57:55.771518    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe762}
	I0815 16:57:55.771526    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:57:55.771533    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:57:55.771547    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:57:55.771560    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:57:55.771578    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:57:55.771590    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:57:55.771629    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:57:55.771646    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:57:55.771657    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:57:55.771666    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:57:55.771674    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:57:55.771682    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:57:55.771689    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:57:55.771696    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:57:55.771707    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:57:55.771716    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:57:55.771724    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:57:55.771733    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:57:57.772032    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 15
	I0815 16:57:57.772049    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:57:57.772113    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5760
	I0815 16:57:57.772900    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for a:c9:e4:13:77:83 in /var/db/dhcpd_leases ...
	I0815 16:57:57.772963    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0815 16:57:57.772972    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe762}
	I0815 16:57:57.772983    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:57:57.772992    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:57:57.773004    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:57:57.773011    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:57:57.773018    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:57:57.773024    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:57:57.773032    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:57:57.773041    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:57:57.773056    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:57:57.773069    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:57:57.773077    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:57:57.773085    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:57:57.773092    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:57:57.773100    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:57:57.773108    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:57:57.773116    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:57:57.773133    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:57:57.773142    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:57:59.773839    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 16
	I0815 16:57:59.773853    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:57:59.773947    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5760
	I0815 16:57:59.774716    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for a:c9:e4:13:77:83 in /var/db/dhcpd_leases ...
	I0815 16:57:59.774762    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0815 16:57:59.774771    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe762}
	I0815 16:57:59.774779    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:57:59.774785    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:57:59.774800    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:57:59.774814    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:57:59.774822    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:57:59.774837    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:57:59.774850    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:57:59.774859    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:57:59.774866    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:57:59.774874    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:57:59.774890    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:57:59.774901    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:57:59.774911    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:57:59.774919    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:57:59.774928    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:57:59.774936    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:57:59.774943    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:57:59.774950    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:58:01.775095    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 17
	I0815 16:58:01.775112    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:58:01.775185    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5760
	I0815 16:58:01.775962    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for a:c9:e4:13:77:83 in /var/db/dhcpd_leases ...
	I0815 16:58:01.776018    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0815 16:58:01.776030    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe762}
	I0815 16:58:01.776040    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:58:01.776059    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:58:01.776079    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:58:01.776087    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:58:01.776095    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:58:01.776103    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:58:01.776111    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:58:01.776119    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:58:01.776127    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:58:01.776135    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:58:01.776142    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:58:01.776148    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:58:01.776171    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:58:01.776184    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:58:01.776194    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:58:01.776200    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:58:01.776207    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:58:01.776215    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:58:03.776670    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 18
	I0815 16:58:03.776684    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:58:03.776782    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5760
	I0815 16:58:03.777580    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for a:c9:e4:13:77:83 in /var/db/dhcpd_leases ...
	I0815 16:58:03.777610    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0815 16:58:03.777617    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe762}
	I0815 16:58:03.777640    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:58:03.777651    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:58:03.777658    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:58:03.777665    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:58:03.777674    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:58:03.777681    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:58:03.777699    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:58:03.777706    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:58:03.777713    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:58:03.777721    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:58:03.777732    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:58:03.777742    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:58:03.777752    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:58:03.777760    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:58:03.777767    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:58:03.777775    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:58:03.777783    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:58:03.777791    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:58:05.777999    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 19
	I0815 16:58:05.778012    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:58:05.778086    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5760
	I0815 16:58:05.778858    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for a:c9:e4:13:77:83 in /var/db/dhcpd_leases ...
	I0815 16:58:05.778912    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0815 16:58:05.778923    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe762}
	I0815 16:58:05.778932    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:58:05.778941    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:58:05.778948    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:58:05.778955    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:58:05.778966    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:58:05.778977    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:58:05.778985    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:58:05.778991    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:58:05.779002    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:58:05.779016    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:58:05.779032    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:58:05.779038    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:58:05.779046    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:58:05.779051    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:58:05.779065    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:58:05.779072    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:58:05.779079    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:58:05.779084    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:58:07.781115    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 20
	I0815 16:58:07.781129    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:58:07.781202    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5760
	I0815 16:58:07.782022    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for a:c9:e4:13:77:83 in /var/db/dhcpd_leases ...
	I0815 16:58:07.782063    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0815 16:58:07.782072    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66be960d}
	I0815 16:58:07.782082    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:58:07.782090    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:58:07.782097    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:58:07.782103    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:58:07.782119    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:58:07.782128    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:58:07.782135    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:58:07.782143    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:58:07.782166    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:58:07.782180    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:58:07.782196    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:58:07.782208    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:58:07.782216    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:58:07.782234    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:58:07.782253    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:58:07.782265    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:58:07.782272    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:58:07.782280    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
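
Between attempts 19 and 20 the lease value recorded for 192.169.0.20 changes (0x66bfe762 versus 0x66be960d), so the lease file is being rewritten even though no entry for a:c9:e4:13:77:83 ever appears. Assuming the Lease field is a hex-encoded Unix expiry time (the format macOS's bootpd writes, which is an assumption here, not something the log states), the two values can be decoded directly:

package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	// Decode the two Lease values seen for 192.169.0.20 above.
	for _, lease := range []string{"66bfe762", "66be960d"} {
		secs, err := strconv.ParseInt(lease, 16, 64)
		if err != nil {
			panic(err)
		}
		fmt.Printf("0x%s -> %s\n", lease, time.Unix(secs, 0).UTC())
	}
}

Both values decode to mid-August 2024 expiries, i.e. ordinary lease churn among the existing VMs rather than progress for the new one.
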
	I0815 16:58:09.783973    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 21
	I0815 16:58:09.783992    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:58:09.784047    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5760
	I0815 16:58:09.784828    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for a:c9:e4:13:77:83 in /var/db/dhcpd_leases ...
	I0815 16:58:09.784893    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0815 16:58:09.784904    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66be960d}
	I0815 16:58:09.784914    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:58:09.784920    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:58:09.784930    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:58:09.784936    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:58:09.784953    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:58:09.784962    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:58:09.784984    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:58:09.784997    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:58:09.785006    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:58:09.785013    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:58:09.785020    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:58:09.785029    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:58:09.785037    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:58:09.785045    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:58:09.785054    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:58:09.785063    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:58:09.785089    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:58:09.785104    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:58:11.785240    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 22
	I0815 16:58:11.785252    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:58:11.785329    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5760
	I0815 16:58:11.786309    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for a:c9:e4:13:77:83 in /var/db/dhcpd_leases ...
	I0815 16:58:11.786364    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0815 16:58:11.786372    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66be960d}
	I0815 16:58:11.786380    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:58:11.786387    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:58:11.786401    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:58:11.786415    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:58:11.786426    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:58:11.786435    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:58:11.786443    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:58:11.786451    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:58:11.786464    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:58:11.786474    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:58:11.786485    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:58:11.786493    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:58:11.786506    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:58:11.786520    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:58:11.786529    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:58:11.786537    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:58:11.786544    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:58:11.786551    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:58:13.788366    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 23
	I0815 16:58:13.788378    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:58:13.788420    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5760
	I0815 16:58:13.789368    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for a:c9:e4:13:77:83 in /var/db/dhcpd_leases ...
	I0815 16:58:13.789411    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0815 16:58:13.789423    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66be960d}
	I0815 16:58:13.789450    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:58:13.789459    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:58:13.789486    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:58:13.789507    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:58:13.789522    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:58:13.789534    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:58:13.789543    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:58:13.789552    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:58:13.789562    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:58:13.789570    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:58:13.789577    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:58:13.789589    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:58:13.789598    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:58:13.789606    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:58:13.789613    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:58:13.789621    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:58:13.789628    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:58:13.789636    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:58:15.791668    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 24
	I0815 16:58:15.791683    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:58:15.791750    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5760
	I0815 16:58:15.792530    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for a:c9:e4:13:77:83 in /var/db/dhcpd_leases ...
	I0815 16:58:15.792632    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0815 16:58:15.792642    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66be960d}
	I0815 16:58:15.792664    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:58:15.792688    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:58:15.792700    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:58:15.792708    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:58:15.792715    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:58:15.792744    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:58:15.792760    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:58:15.792772    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:58:15.792781    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:58:15.792790    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:58:15.792797    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:58:15.792804    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:58:15.792811    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:58:15.792828    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:58:15.792839    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:58:15.792854    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:58:15.792864    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:58:15.792881    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:58:17.793437    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 25
	I0815 16:58:17.793452    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:58:17.793512    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5760
	I0815 16:58:17.794306    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for a:c9:e4:13:77:83 in /var/db/dhcpd_leases ...
	I0815 16:58:17.794347    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0815 16:58:17.794357    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66be960d}
	I0815 16:58:17.794365    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:58:17.794371    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:58:17.794390    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:58:17.794403    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:58:17.794410    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:58:17.794418    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:58:17.794426    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:58:17.794433    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:58:17.794444    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:58:17.794457    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:58:17.794466    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:58:17.794475    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:58:17.794490    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:58:17.794500    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:58:17.794515    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:58:17.794523    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:58:17.794533    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:58:17.794542    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:58:19.795780    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 26
	I0815 16:58:19.795792    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:58:19.795864    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5760
	I0815 16:58:19.796654    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for a:c9:e4:13:77:83 in /var/db/dhcpd_leases ...
	I0815 16:58:19.796707    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0815 16:58:19.796718    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66be960d}
	I0815 16:58:19.796747    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:58:19.796760    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:58:19.796776    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:58:19.796783    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:58:19.796789    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:58:19.796798    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:58:19.796812    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:58:19.796825    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:58:19.796839    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:58:19.796849    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:58:19.796857    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:58:19.796865    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:58:19.796878    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:58:19.796889    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:58:19.796902    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:58:19.796913    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:58:19.796923    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:58:19.796929    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:58:21.797678    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 27
	I0815 16:58:21.797695    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:58:21.797736    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5760
	I0815 16:58:21.798554    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for a:c9:e4:13:77:83 in /var/db/dhcpd_leases ...
	I0815 16:58:21.798603    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0815 16:58:21.798614    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66be960d}
	I0815 16:58:21.798622    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:58:21.798628    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:58:21.798636    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:58:21.798643    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:58:21.798682    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:58:21.798719    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:58:21.798728    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:58:21.798736    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:58:21.798758    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:58:21.798770    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:58:21.798778    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:58:21.798786    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:58:21.798802    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:58:21.798815    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:58:21.798836    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:58:21.798842    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:58:21.798881    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:58:21.798892    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:58:23.799052    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 28
	I0815 16:58:23.799068    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:58:23.799139    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5760
	I0815 16:58:23.799919    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for a:c9:e4:13:77:83 in /var/db/dhcpd_leases ...
	I0815 16:58:23.799984    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0815 16:58:23.800001    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66be960d}
	I0815 16:58:23.800041    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:58:23.800054    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:58:23.800068    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:58:23.800080    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:58:23.800096    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:58:23.800129    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:58:23.800139    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:58:23.800147    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:58:23.800153    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:58:23.800162    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:58:23.800169    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:58:23.800178    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:58:23.800186    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:58:23.800199    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:58:23.800207    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:58:23.800222    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:58:23.800242    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:58:23.800253    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:58:25.801339    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Attempt 29
	I0815 16:58:25.801353    5609 main.go:141] libmachine: (offline-docker-363000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:58:25.801458    5609 main.go:141] libmachine: (offline-docker-363000) DBG | hyperkit pid from json: 5760
	I0815 16:58:25.802298    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Searching for a:c9:e4:13:77:83 in /var/db/dhcpd_leases ...
	I0815 16:58:25.802342    5609 main.go:141] libmachine: (offline-docker-363000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0815 16:58:25.802353    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66be960d}
	I0815 16:58:25.802379    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 16:58:25.802396    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 16:58:25.802409    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 16:58:25.802418    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 16:58:25.802427    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 16:58:25.802435    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 16:58:25.802442    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 16:58:25.802450    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 16:58:25.802457    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 16:58:25.802465    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 16:58:25.802473    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 16:58:25.802481    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 16:58:25.802489    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:58:25.802496    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:58:25.802504    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:58:25.802512    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 16:58:25.802519    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 16:58:25.802527    5609 main.go:141] libmachine: (offline-docker-363000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 16:58:27.804560    5609 client.go:171] duration metric: took 1m0.810428616s to LocalClient.Create
	I0815 16:58:29.806712    5609 start.go:128] duration metric: took 1m2.845934783s to createHost
	I0815 16:58:29.806776    5609 start.go:83] releasing machines lock for "offline-docker-363000", held for 1m2.8460552s
	W0815 16:58:29.806838    5609 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p offline-docker-363000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a:c9:e4:13:77:83
	I0815 16:58:29.849037    5609 out.go:201] 
	W0815 16:58:29.912066    5609 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a:c9:e4:13:77:83
	W0815 16:58:29.912082    5609 out.go:270] * 
	W0815 16:58:29.912744    5609 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 16:58:29.975203    5609 out.go:201] 

** /stderr **
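The retry loop above is the whole failure: roughly every two seconds the hyperkit driver re-reads /var/db/dhcpd_leases looking for the VM's MAC address (a:c9:e4:13:77:83), finds the same 19 stale minikube leases, and gives up after about a minute of attempts. A minimal Go sketch of that lookup follows; it is illustrative rather than the driver's actual code, and it parses entries as rendered in this log, not the raw on-disk lease format.

// Minimal sketch (not the hyperkit driver's code) of the lookup being
// retried above: scan dhcp entries for the VM's MAC and return its IP.
package main

import (
	"fmt"
	"regexp"
)

// Matches the "IPAddress:... HWAddress:..." fields as rendered in this log.
var entryRE = regexp.MustCompile(`IPAddress:(\S+) HWAddress:(\S+) `)

// findIP returns the IP leased to mac, or "" when no entry matches --
// the condition that produces "could not find an IP address" above.
func findIP(entries []string, mac string) string {
	for _, e := range entries {
		if m := entryRE.FindStringSubmatch(e); m != nil && m[2] == mac {
			return m[1]
		}
	}
	return ""
}

func main() {
	entries := []string{
		"dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66be960d}",
	}
	fmt.Printf("%q\n", findIP(entries, "a:c9:e4:13:77:83")) // "" -> the "never found" case above
}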
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-363000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-08-15 16:58:30.094217 -0700 PDT m=+3217.028377794
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-363000 -n offline-docker-363000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-363000 -n offline-docker-363000: exit status 7 (85.230932ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0815 16:58:30.177381    5830 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0815 16:58:30.177403    5830 status.go:249] status error: getting IP: IP address is not set

** /stderr **
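The --format value in the status command above is a Go text/template rendered against minikube's status struct, so {{.Host}} prints just the host state, here "Error" because the driver never obtained an IP. A minimal sketch, using an illustrative struct rather than minikube's exact type:

// How a --format template like {{.Host}} is rendered with text/template.
package main

import (
	"os"
	"text/template"
)

// Status is illustrative only; minikube's real status type has more fields.
type Status struct{ Host string }

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	// With no driver IP, the host state reported above is "Error".
	_ = tmpl.Execute(os.Stdout, Status{Host: "Error"}) // prints: Error
}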
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-363000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "offline-docker-363000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-363000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-363000: (5.236288731s)
--- FAIL: TestOffline (146.61s)

TestCertOptions (141.88s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-768000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
E0815 17:12:21.121921    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:12:38.045908    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-options-768000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : exit status 80 (2m16.201835298s)

-- stdout --
	* [cert-options-768000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-977/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-options-768000" primary control-plane node in "cert-options-768000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "cert-options-768000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 22:42:af:8c:13:8e
	* Failed to start hyperkit VM. Running "minikube delete -p cert-options-768000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 32:94:31:56:72:2b
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 32:94:31:56:72:2b
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-options-768000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-768000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p cert-options-768000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 50 (160.747492ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-768000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-amd64 -p cert-options-768000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 50
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
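The four assertions above verify that the --apiserver-ips and --apiserver-names passed to minikube start ended up in the API server certificate's Subject Alternative Names; since the VM never started, there was no certificate to inspect. A minimal sketch of the same check using crypto/x509 instead of the openssl call the test shells out to (the certificate path is hypothetical):

// SAN check sketch: DNS SANs live in DNSNames, IP SANs in IPAddresses.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // hypothetical local copy of the cert
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("apiserver.crt: not PEM data")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// DNS SANs (localhost, www.google.com) appear in DNSNames;
	// IP SANs (127.0.0.1, 192.168.15.15) appear in IPAddresses.
	fmt.Println("dns sans:", cert.DNSNames)
	want := net.ParseIP("192.168.15.15")
	for _, ip := range cert.IPAddresses {
		if ip.Equal(want) {
			fmt.Println("found ip san:", ip)
		}
	}
}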
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-768000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 'kubectl config view' =
-- stdout --
	apiVersion: v1
	clusters: null
	contexts: null
	current-context: ""
	kind: Config
	preferences: {}
	users: null

-- /stdout --
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-768000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p cert-options-768000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 50 (160.676676ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-768000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-amd64 ssh -p cert-options-768000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 50
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-768000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-08-15 17:13:46.424022 -0700 PDT m=+4133.300358991
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-768000 -n cert-options-768000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-768000 -n cert-options-768000: exit status 7 (78.001925ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0815 17:13:46.500080    6833 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0815 17:13:46.500103    6833 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-768000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "cert-options-768000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-768000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-768000: (5.235487316s)
--- FAIL: TestCertOptions (141.88s)

TestCertExpiration (1788.74s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-926000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
E0815 17:06:20.273467    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/skaffold-329000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:07:38.024787    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:08:57.859477    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-926000 --memory=2048 --cert-expiration=3m --driver=hyperkit : exit status 80 (4m9.271065239s)

-- stdout --
	* [cert-expiration-926000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-977/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-expiration-926000" primary control-plane node in "cert-expiration-926000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "cert-expiration-926000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 6e:22:ca:21:8:66
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-926000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ce:b2:b2:36:d:b
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ce:b2:b2:36:d:b
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-expiration-926000 --memory=2048 --cert-expiration=3m --driver=hyperkit " : exit status 80
E0815 17:10:52.600897    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/skaffold-329000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-926000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-926000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : exit status 80 (22m34.092342897s)

-- stdout --
	* [cert-expiration-926000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-977/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "cert-expiration-926000" primary control-plane node in "cert-expiration-926000" cluster
	* Updating the running hyperkit "cert-expiration-926000" VM ...
	* Updating the running hyperkit "cert-expiration-926000" VM ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-926000" may fix it: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-amd64 start -p cert-expiration-926000 --memory=2048 --cert-expiration=8760h --driver=hyperkit " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-926000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-977/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "cert-expiration-926000" primary control-plane node in "cert-expiration-926000" cluster
	* Updating the running hyperkit "cert-expiration-926000" VM ...
	* Updating the running hyperkit "cert-expiration-926000" VM ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-926000" may fix it: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
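TestCertExpiration first mints certificates with --cert-expiration=3m, waits for them to lapse, and expects the second start (--cert-expiration=8760h) to warn about expired certs; this run never produced a cluster to age, since provisioning failed before any certificate was issued. The core condition reduces to comparing a certificate's NotAfter with the current time, sketched below (illustrative only; the path is hypothetical and this is not minikube's code):

// Expiry check sketch: a cert is stale once time.Now() passes NotAfter.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expired(path string) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: not PEM data", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().After(cert.NotAfter), nil
}

func main() {
	stale, err := expired("client.crt") // hypothetical path
	fmt.Println(stale, err)
}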
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-08-15 17:35:38.367981 -0700 PDT m=+5445.201591595
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-926000 -n cert-expiration-926000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-926000 -n cert-expiration-926000: exit status 7 (88.77976ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0815 17:35:38.454508    8483 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0815 17:35:38.454531    8483 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-926000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "cert-expiration-926000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-926000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-926000: (5.290926383s)
--- FAIL: TestCertExpiration (1788.74s)

TestDockerFlags (143.28s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-943000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-943000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (2m17.608442773s)

-- stdout --
	* [docker-flags-943000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-977/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "docker-flags-943000" primary control-plane node in "docker-flags-943000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "docker-flags-943000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0815 17:09:06.624929    6707 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:09:06.625212    6707 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:09:06.625217    6707 out.go:358] Setting ErrFile to fd 2...
	I0815 17:09:06.625221    6707 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:09:06.625414    6707 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-977/.minikube/bin
	I0815 17:09:06.626968    6707 out.go:352] Setting JSON to false
	I0815 17:09:06.650302    6707 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":4117,"bootTime":1723762829,"procs":440,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0815 17:09:06.650394    6707 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 17:09:06.672275    6707 out.go:177] * [docker-flags-943000] minikube v1.33.1 on Darwin 14.6.1
	I0815 17:09:06.714658    6707 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 17:09:06.714663    6707 notify.go:220] Checking for updates...
	I0815 17:09:06.757765    6707 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 17:09:06.778562    6707 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0815 17:09:06.799689    6707 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:09:06.820702    6707 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-977/.minikube
	I0815 17:09:06.841641    6707 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:09:06.863258    6707 config.go:182] Loaded profile config "cert-expiration-926000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 17:09:06.863368    6707 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:09:06.891721    6707 out.go:177] * Using the hyperkit driver based on user configuration
	I0815 17:09:06.933737    6707 start.go:297] selected driver: hyperkit
	I0815 17:09:06.933751    6707 start.go:901] validating driver "hyperkit" against <nil>
	I0815 17:09:06.933763    6707 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:09:06.936817    6707 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:09:06.936930    6707 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19452-977/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0815 17:09:06.945440    6707 install.go:137] /Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit version is 1.33.1
	I0815 17:09:06.949372    6707 install.go:79] stdout: /Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit
	I0815 17:09:06.949396    6707 install.go:81] /Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit looks good
	I0815 17:09:06.949438    6707 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 17:09:06.949662    6707 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0815 17:09:06.949694    6707 cni.go:84] Creating CNI manager for ""
	I0815 17:09:06.949711    6707 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 17:09:06.949715    6707 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 17:09:06.949774    6707 start.go:340] cluster config:
	{Name:docker-flags-943000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-943000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:09:06.949866    6707 iso.go:125] acquiring lock: {Name:mkb93fc66b60be15dffa9eb2999a4be8d8d4d218 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:09:06.991490    6707 out.go:177] * Starting "docker-flags-943000" primary control-plane node in "docker-flags-943000" cluster
	I0815 17:09:07.011673    6707 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 17:09:07.011704    6707 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0815 17:09:07.011720    6707 cache.go:56] Caching tarball of preloaded images
	I0815 17:09:07.011834    6707 preload.go:172] Found /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0815 17:09:07.011844    6707 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 17:09:07.011927    6707 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/docker-flags-943000/config.json ...
	I0815 17:09:07.011943    6707 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/docker-flags-943000/config.json: {Name:mk966e44b12eb247c57400131685bc676aade89a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:09:07.012254    6707 start.go:360] acquireMachinesLock for docker-flags-943000: {Name:mkac8034c8a60617271f2754a03dcaf74fc4fef7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:10:04.016437    6707 start.go:364] duration metric: took 57.002990511s to acquireMachinesLock for "docker-flags-943000"
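The 57-second gap between "acquireMachinesLock" and the "took ... to acquireMachinesLock" line above is time spent waiting on a per-profile machines lock whose parameters are visible in the log (Delay:500ms Timeout:13m0s); the lock was presumably held by another concurrent create. A minimal sketch of that acquire-with-timeout shape, assuming a simple polling loop (minikube's real implementation uses a lock library, so this only shows the timing behavior):

	// Hypothetical poll-until-timeout acquire; tryLock is illustrative and
	// stands in for the real file-lock attempt.
	package lockwait

	import (
		"fmt"
		"time"
	)

	func acquireWithTimeout(tryLock func() bool, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for !tryLock() {
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %s", timeout) // Timeout:13m0s in the log
			}
			time.Sleep(delay) // retry every Delay, 500ms in the log
		}
		return nil
	}

Here the acquire succeeded after ~57s, well inside the 13-minute timeout.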
	I0815 17:10:04.016514    6707 start.go:93] Provisioning new machine with config: &{Name:docker-flags-943000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-943000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 17:10:04.016593    6707 start.go:125] createHost starting for "" (driver="hyperkit")
	I0815 17:10:04.058517    6707 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0815 17:10:04.058663    6707 main.go:141] libmachine: Found binary path at /Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit
	I0815 17:10:04.058700    6707 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 17:10:04.067636    6707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54689
	I0815 17:10:04.068011    6707 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:10:04.068494    6707 main.go:141] libmachine: Using API Version  1
	I0815 17:10:04.068503    6707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:10:04.068723    6707 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:10:04.068845    6707 main.go:141] libmachine: (docker-flags-943000) Calling .GetMachineName
	I0815 17:10:04.068973    6707 main.go:141] libmachine: (docker-flags-943000) Calling .DriverName
	I0815 17:10:04.069081    6707 start.go:159] libmachine.API.Create for "docker-flags-943000" (driver="hyperkit")
	I0815 17:10:04.069101    6707 client.go:168] LocalClient.Create starting
	I0815 17:10:04.069130    6707 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem
	I0815 17:10:04.069179    6707 main.go:141] libmachine: Decoding PEM data...
	I0815 17:10:04.069194    6707 main.go:141] libmachine: Parsing certificate...
	I0815 17:10:04.069251    6707 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem
	I0815 17:10:04.069288    6707 main.go:141] libmachine: Decoding PEM data...
	I0815 17:10:04.069302    6707 main.go:141] libmachine: Parsing certificate...
	I0815 17:10:04.069317    6707 main.go:141] libmachine: Running pre-create checks...
	I0815 17:10:04.069328    6707 main.go:141] libmachine: (docker-flags-943000) Calling .PreCreateCheck
	I0815 17:10:04.069437    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:04.069610    6707 main.go:141] libmachine: (docker-flags-943000) Calling .GetConfigRaw
	I0815 17:10:04.079704    6707 main.go:141] libmachine: Creating machine...
	I0815 17:10:04.079713    6707 main.go:141] libmachine: (docker-flags-943000) Calling .Create
	I0815 17:10:04.079843    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:04.080013    6707 main.go:141] libmachine: (docker-flags-943000) DBG | I0815 17:10:04.079833    6727 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19452-977/.minikube
	I0815 17:10:04.080074    6707 main.go:141] libmachine: (docker-flags-943000) Downloading /Users/jenkins/minikube-integration/19452-977/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-977/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0815 17:10:04.410756    6707 main.go:141] libmachine: (docker-flags-943000) DBG | I0815 17:10:04.410651    6727 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/id_rsa...
	I0815 17:10:04.461302    6707 main.go:141] libmachine: (docker-flags-943000) DBG | I0815 17:10:04.461240    6727 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/docker-flags-943000.rawdisk...
	I0815 17:10:04.461317    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Writing magic tar header
	I0815 17:10:04.461326    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Writing SSH key tar header
	I0815 17:10:04.461728    6707 main.go:141] libmachine: (docker-flags-943000) DBG | I0815 17:10:04.461683    6727 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000 ...
	I0815 17:10:04.841466    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:04.841519    6707 main.go:141] libmachine: (docker-flags-943000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/hyperkit.pid
	I0815 17:10:04.841532    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Using UUID fd29479f-b23d-4d7b-ad91-2216df62d905
	I0815 17:10:04.866781    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Generated MAC a2:50:f7:1c:a1:76
	I0815 17:10:04.866797    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-943000
	I0815 17:10:04.866850    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:04 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"fd29479f-b23d-4d7b-ad91-2216df62d905", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000122330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 17:10:04.866893    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:04 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"fd29479f-b23d-4d7b-ad91-2216df62d905", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000122330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 17:10:04.867005    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:04 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "fd29479f-b23d-4d7b-ad91-2216df62d905", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/docker-flags-943000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-943000"}
	I0815 17:10:04.867060    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:04 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U fd29479f-b23d-4d7b-ad91-2216df62d905 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/docker-flags-943000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/console-ring -f kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-943000"
	I0815 17:10:04.867078    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:04 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0815 17:10:04.869936    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:04 DEBUG: hyperkit: Pid is 6728
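The DEBUG lines above show the driver expanding its HyperKit struct into the final argv: CPU and memory flags (-c 2 -m 2048M), virtio block/net/rnd devices on PCI slots (-s ...), a pid file (-F), an autopty serial console (-l com1,...), and a direct kernel boot via -f kexec,bzimage,initrd,cmdline. A hypothetical sketch of spawning that process with os/exec, abbreviated (the real driver goes through hyperkit's Go bindings, and the UUID flag and full kernel cmdline are omitted here):

	package main

	import "os/exec"

	func main() {
		stateDir := "/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000"
		cmd := exec.Command("/usr/local/bin/hyperkit",
			"-A", "-u",
			"-F", stateDir+"/hyperkit.pid", // pid file, read back as "Pid is 6728"
			"-c", "2", "-m", "2048M", // CPUs/Memory from the machine config
			"-s", "0:0,hostbridge", "-s", "31,lpc",
			"-s", "1:0,virtio-net", // vmnet NIC whose MAC the lease scan below looks for
			"-s", "2:0,virtio-blk,"+stateDir+"/docker-flags-943000.rawdisk",
			"-s", "3,ahci-cd,"+stateDir+"/boot2docker.iso",
			"-s", "4,virtio-rnd",
			"-l", "com1,autopty="+stateDir+"/tty,log="+stateDir+"/console-ring",
			"-f", "kexec,"+stateDir+"/bzimage,"+stateDir+"/initrd,loglevel=3 console=ttyS0",
		)
		_ = cmd.Start() // driver records cmd.Process.Pid and begins polling DHCP leases
	}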
	I0815 17:10:04.870911    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Attempt 0
	I0815 17:10:04.870928    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:04.870978    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6728
	I0815 17:10:04.871970    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Searching for a2:50:f7:1c:a1:76 in /var/db/dhcpd_leases ...
	I0815 17:10:04.872002    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:10:04.872016    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:10:04.872045    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:10:04.872052    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:10:04.872059    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:10:04.872066    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:10:04.872097    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:10:04.872115    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:10:04.872131    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:10:04.872146    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:10:04.872161    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:10:04.872171    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:10:04.872180    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:10:04.872188    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:10:04.872198    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:10:04.872209    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:10:04.872222    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:10:04.872237    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:10:04.872256    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:10:04.872267    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:10:04.872277    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:10:04.872294    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:10:04.872311    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:10:04.872328    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
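Each "Attempt N" block above is one scan of the macOS vmnet DHCP lease database: the driver generated a MAC for the VM's NIC (a2:50:f7:1c:a1:76) and polls /var/db/dhcpd_leases until an entry with that hardware address appears, which yields the guest IP. All 23 entries found here belong to earlier minikube VMs, so the scan comes up empty and is retried. A simplified, hypothetical sketch of such a scan, assuming the file's usual key=value entry layout (the real driver's parsing differs in detail):

	package lease

	import (
		"os"
		"strings"
	)

	// findIPByMAC walks the lease file and returns the ip_address of the
	// entry whose hw_address matches the VM's MAC.
	func findIPByMAC(leaseFile, mac string) (string, bool) {
		data, err := os.ReadFile(leaseFile)
		if err != nil {
			return "", false
		}
		ip := ""
		for _, line := range strings.Split(string(data), "\n") {
			line = strings.TrimSpace(line)
			if v, ok := strings.CutPrefix(line, "ip_address="); ok {
				ip = v // remember the current entry's IP
			}
			if v, ok := strings.CutPrefix(line, "hw_address="); ok {
				// stored as "<type>,<mac>", e.g. "1,a2:50:f7:1c:a1:76"; the file
				// drops leading zeros in octets, which real parsers normalize.
				if strings.HasSuffix(v, mac) {
					return ip, true
				}
			}
		}
		return "", false
	}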
	I0815 17:10:04.877448    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:04 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0815 17:10:04.885439    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:04 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0815 17:10:04.886293    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 17:10:04.886325    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 17:10:04.886348    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 17:10:04.886362    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 17:10:05.264873    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:05 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0815 17:10:05.264889    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:05 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0815 17:10:05.379318    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 17:10:05.379338    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 17:10:05.379358    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 17:10:05.379372    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 17:10:05.380242    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:05 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0815 17:10:05.380253    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:05 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0815 17:10:06.874220    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Attempt 1
	I0815 17:10:06.874234    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:06.874356    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6728
	I0815 17:10:06.875128    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Searching for a2:50:f7:1c:a1:76 in /var/db/dhcpd_leases ...
	I0815 17:10:06.875197    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:10:06.875212    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:10:06.875221    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:10:06.875227    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:10:06.875233    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:10:06.875238    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:10:06.875247    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:10:06.875253    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:10:06.875263    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:10:06.875271    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:10:06.875293    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:10:06.875315    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:10:06.875328    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:10:06.875345    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:10:06.875354    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:10:06.875378    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:10:06.875392    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:10:06.875400    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:10:06.875409    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:10:06.875419    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:10:06.875426    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:10:06.875433    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:10:06.875446    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:10:06.875455    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:10:08.877405    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Attempt 2
	I0815 17:10:08.877423    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:08.877538    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6728
	I0815 17:10:08.878326    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Searching for a2:50:f7:1c:a1:76 in /var/db/dhcpd_leases ...
	I0815 17:10:08.878389    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:10:08.878400    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:10:08.878409    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:10:08.878415    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:10:08.878432    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:10:08.878448    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:10:08.878457    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:10:08.878466    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:10:08.878476    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:10:08.878486    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:10:08.878493    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:10:08.878500    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:10:08.878506    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:10:08.878526    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:10:08.878535    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:10:08.878542    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:10:08.878550    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:10:08.878564    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:10:08.878586    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:10:08.878594    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:10:08.878601    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:10:08.878609    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:10:08.878616    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:10:08.878627    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:10:09.072383    6707 main.go:141] libmachine: (docker-flags-943000) DBG | panic: runtime error: slice bounds out of range [54:2]
	I0815 17:10:09.091889    6707 client.go:171] duration metric: took 5.022675654s to LocalClient.Create
	I0815 17:10:09.091893    6707 main.go:141] libmachine: Wrapper Docker Machine process exiting due to closed plugin server (unexpected EOF)
	I0815 17:10:11.092264    6707 start.go:128] duration metric: took 7.075523403s to createHost
	I0815 17:10:11.092280    6707 start.go:83] releasing machines lock for "docker-flags-943000", held for 7.075671028s
	W0815 17:10:11.092295    6707 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: unexpected EOF
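This is the root-cause chain of the failure: the hyperkit driver plugin panicked ("slice bounds out of range [54:2]"), the panic tore down the plugin's RPC server, and libmachine saw the dead connection as "unexpected EOF", which bubbles up as the machine-creation error above. A Go slice expression s[low:high] panics with exactly that message whenever low > high at run time; a minimal reproduction (constant indices in that order would be rejected at compile time, so variables are used):

	// Reproduces the panic class in the log: low index 54 > high index 2,
	// meaning some buffer in the driver was sliced with inverted bounds.
	package main

	func main() {
		b := make([]byte, 100)
		low, high := 54, 2
		_ = b[low:high] // panic: runtime error: slice bounds out of range [54:2]
	}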
	I0815 17:10:11.092706    6707 main.go:141] libmachine: Found binary path at /Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit
	I0815 17:10:11.092757    6707 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 17:10:11.101752    6707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54691
	I0815 17:10:11.102245    6707 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:10:11.102698    6707 main.go:141] libmachine: Using API Version  1
	I0815 17:10:11.102713    6707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:10:11.102967    6707 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:10:11.103339    6707 main.go:141] libmachine: Found binary path at /Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit
	I0815 17:10:11.103383    6707 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 17:10:11.112210    6707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54693
	I0815 17:10:11.112657    6707 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:10:11.113033    6707 main.go:141] libmachine: Using API Version  1
	I0815 17:10:11.113048    6707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:10:11.113311    6707 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:10:11.113475    6707 main.go:141] libmachine: (docker-flags-943000) Calling .GetState
	I0815 17:10:11.113557    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:11.113628    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6728
	I0815 17:10:11.114586    6707 main.go:141] libmachine: (docker-flags-943000) Calling .DriverName
	I0815 17:10:11.136011    6707 out.go:177] * Deleting "docker-flags-943000" in hyperkit ...
	I0815 17:10:11.178749    6707 main.go:141] libmachine: (docker-flags-943000) Calling .Remove
	I0815 17:10:11.178933    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:11.178944    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:11.179008    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6728
	I0815 17:10:11.179940    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:11.180005    6707 main.go:141] libmachine: (docker-flags-943000) DBG | waiting for graceful shutdown
	I0815 17:10:12.180569    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:12.180685    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6728
	I0815 17:10:12.181596    6707 main.go:141] libmachine: (docker-flags-943000) DBG | waiting for graceful shutdown
	I0815 17:10:13.181898    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:13.181961    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6728
	I0815 17:10:13.183698    6707 main.go:141] libmachine: (docker-flags-943000) DBG | waiting for graceful shutdown
	I0815 17:10:14.184888    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:14.184962    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6728
	I0815 17:10:14.185517    6707 main.go:141] libmachine: (docker-flags-943000) DBG | waiting for graceful shutdown
	I0815 17:10:15.187675    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:15.187748    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6728
	I0815 17:10:15.188294    6707 main.go:141] libmachine: (docker-flags-943000) DBG | waiting for graceful shutdown
	I0815 17:10:16.189636    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:16.189712    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6728
	I0815 17:10:16.190879    6707 main.go:141] libmachine: (docker-flags-943000) DBG | sending sigkill
	I0815 17:10:16.190892    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	W0815 17:10:16.200165    6707 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: unexpected EOF
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: unexpected EOF
	I0815 17:10:16.200177    6707 start.go:729] Will try again in 5 seconds ...
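After the first createHost attempt fails, minikube deletes the half-created VM ("* Deleting "docker-flags-943000" in hyperkit ..."), waits a fixed 5 seconds, and repeats the whole provisioning once, which is why stdout shows two "Creating hyperkit VM" lines. A sketch of that one-shot retry shape (hypothetical helper, not minikube's actual control flow):

	package provision

	import "time"

	// startHostWithRetry mirrors the sequence in the log: one failed create,
	// a cleanup, a fixed 5s pause, then a final attempt.
	func startHostWithRetry(create func() error, cleanup func()) error {
		if err := create(); err == nil {
			return nil
		}
		cleanup()                   // "* Deleting ... in hyperkit ..."
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		return create()
	}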
	I0815 17:10:21.200436    6707 start.go:360] acquireMachinesLock for docker-flags-943000: {Name:mkac8034c8a60617271f2754a03dcaf74fc4fef7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:10:21.200602    6707 start.go:364] duration metric: took 148.077µs to acquireMachinesLock for "docker-flags-943000"
	I0815 17:10:21.200618    6707 start.go:93] Provisioning new machine with config: &{Name:docker-flags-943000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-943000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 17:10:21.200712    6707 start.go:125] createHost starting for "" (driver="hyperkit")
	I0815 17:10:21.222142    6707 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0815 17:10:21.222211    6707 main.go:141] libmachine: Found binary path at /Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit
	I0815 17:10:21.222235    6707 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 17:10:21.231317    6707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54695
	I0815 17:10:21.231678    6707 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:10:21.232030    6707 main.go:141] libmachine: Using API Version  1
	I0815 17:10:21.232052    6707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:10:21.232279    6707 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:10:21.232395    6707 main.go:141] libmachine: (docker-flags-943000) Calling .GetMachineName
	I0815 17:10:21.232482    6707 main.go:141] libmachine: (docker-flags-943000) Calling .DriverName
	I0815 17:10:21.232586    6707 start.go:159] libmachine.API.Create for "docker-flags-943000" (driver="hyperkit")
	I0815 17:10:21.232604    6707 client.go:168] LocalClient.Create starting
	I0815 17:10:21.232632    6707 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem
	I0815 17:10:21.232689    6707 main.go:141] libmachine: Decoding PEM data...
	I0815 17:10:21.232705    6707 main.go:141] libmachine: Parsing certificate...
	I0815 17:10:21.232754    6707 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem
	I0815 17:10:21.232812    6707 main.go:141] libmachine: Decoding PEM data...
	I0815 17:10:21.232824    6707 main.go:141] libmachine: Parsing certificate...
	I0815 17:10:21.232843    6707 main.go:141] libmachine: Running pre-create checks...
	I0815 17:10:21.232848    6707 main.go:141] libmachine: (docker-flags-943000) Calling .PreCreateCheck
	I0815 17:10:21.232936    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:21.232961    6707 main.go:141] libmachine: (docker-flags-943000) Calling .GetConfigRaw
	I0815 17:10:21.233451    6707 main.go:141] libmachine: Creating machine...
	I0815 17:10:21.233461    6707 main.go:141] libmachine: (docker-flags-943000) Calling .Create
	I0815 17:10:21.233535    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:21.233659    6707 main.go:141] libmachine: (docker-flags-943000) DBG | I0815 17:10:21.233528    6735 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19452-977/.minikube
	I0815 17:10:21.233724    6707 main.go:141] libmachine: (docker-flags-943000) Downloading /Users/jenkins/minikube-integration/19452-977/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-977/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0815 17:10:21.424567    6707 main.go:141] libmachine: (docker-flags-943000) DBG | I0815 17:10:21.424466    6735 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/id_rsa...
	I0815 17:10:21.555792    6707 main.go:141] libmachine: (docker-flags-943000) DBG | I0815 17:10:21.555719    6735 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/docker-flags-943000.rawdisk...
	I0815 17:10:21.555802    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Writing magic tar header
	I0815 17:10:21.555811    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Writing SSH key tar header
	I0815 17:10:21.556380    6707 main.go:141] libmachine: (docker-flags-943000) DBG | I0815 17:10:21.556343    6735 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000 ...
	I0815 17:10:21.932323    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:21.932359    6707 main.go:141] libmachine: (docker-flags-943000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/hyperkit.pid
	I0815 17:10:21.932397    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Using UUID 26bb3590-60d2-4d63-8e2d-02cc87930e64
	I0815 17:10:21.957546    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Generated MAC a2:b9:ee:5c:5:91
	I0815 17:10:21.957564    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-943000
	I0815 17:10:21.957598    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:21 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"26bb3590-60d2-4d63-8e2d-02cc87930e64", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 17:10:21.957636    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:21 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"26bb3590-60d2-4d63-8e2d-02cc87930e64", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 17:10:21.957700    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:21 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "26bb3590-60d2-4d63-8e2d-02cc87930e64", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/docker-flags-943000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-943000"}
	I0815 17:10:21.957738    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:21 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 26bb3590-60d2-4d63-8e2d-02cc87930e64 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/docker-flags-943000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/console-ring -f kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-943000"
	I0815 17:10:21.957746    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:21 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0815 17:10:21.960652    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:21 DEBUG: hyperkit: Pid is 6736
	I0815 17:10:21.961251    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Attempt 0
	I0815 17:10:21.961265    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:21.961307    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6736
	I0815 17:10:21.962218    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Searching for a2:b9:ee:5c:5:91 in /var/db/dhcpd_leases ...
	I0815 17:10:21.962268    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:10:21.962287    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:10:21.962304    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:10:21.962316    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:10:21.962324    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:10:21.962332    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:10:21.962342    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:10:21.962360    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:10:21.962375    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:10:21.962383    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:10:21.962397    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:10:21.962407    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:10:21.962416    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:10:21.962435    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:10:21.962450    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:10:21.962461    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:10:21.962469    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:10:21.962487    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:10:21.962501    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:10:21.962515    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:10:21.962528    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:10:21.962541    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:10:21.962553    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:10:21.962573    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
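[Annotation] Everything from "Attempt 0" onward is the driver's IP-discovery loop: roughly every two seconds it re-reads /var/db/dhcpd_leases, dumps every lease it finds, and looks for the freshly assigned MAC a2:b9:ee:5c:5:91. All 23 entries belong to earlier minikube VMs, so the search misses on every pass and the attempt counter keeps climbing; note that vmnet writes MAC octets without leading zeros (e.g. "a:40:7e:..." above), which is why the driver searches for "a2:b9:ee:5c:5:91" rather than "...:05:...". A minimal sketch of that lookup, assuming the stock vmnet lease format of brace-delimited name=value records; findIPByMAC is a hypothetical helper, not the driver's code:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // findIPByMAC scans a dhcpd_leases file for a record whose hw_address
    // matches mac and returns that record's ip_address. The hw_address
    // field carries a "1," hardware-type prefix, which we strip before
    // comparing; mac must already be in the zero-stripped vmnet form.
    func findIPByMAC(path, mac string) (string, error) {
    	f, err := os.Open(path)
    	if err != nil {
    		return "", err
    	}
    	defer f.Close()

    	var ip string // ip_address precedes hw_address within each record
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		switch {
    		case strings.HasPrefix(line, "ip_address="):
    			ip = strings.TrimPrefix(line, "ip_address=")
    		case strings.HasPrefix(line, "hw_address="):
    			hw := strings.TrimPrefix(line, "hw_address=")
    			if strings.TrimPrefix(hw, "1,") == mac {
    				return ip, nil
    			}
    		}
    	}
    	if err := sc.Err(); err != nil {
    		return "", err
    	}
    	return "", fmt.Errorf("%s not found in %s", mac, path)
    }

    func main() {
    	ip, err := findIPByMAC("/var/db/dhcpd_leases", "a2:b9:ee:5c:5:91")
    	if err != nil {
    		fmt.Println(err) // the run above never gets past this point
    		return
    	}
    	fmt.Println("VM IP:", ip)
    }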
	I0815 17:10:21.968173    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:21 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0815 17:10:21.976914    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:21 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/docker-flags-943000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0815 17:10:21.977735    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 17:10:21.977761    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 17:10:21.977774    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 17:10:21.977790    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 17:10:22.354854    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:22 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0815 17:10:22.354871    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:22 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0815 17:10:22.469513    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 17:10:22.469530    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 17:10:22.469543    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 17:10:22.469557    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 17:10:22.470422    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:22 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0815 17:10:22.470435    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:22 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0815 17:10:23.963553    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Attempt 1
	I0815 17:10:23.963566    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:23.963634    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6736
	I0815 17:10:23.964439    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Searching for a2:b9:ee:5c:5:91 in /var/db/dhcpd_leases ...
	I0815 17:10:23.964452    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:10:23.964459    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:10:23.964468    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:10:23.964473    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:10:23.964493    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:10:23.964507    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:10:23.964517    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:10:23.964525    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:10:23.964544    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:10:23.964557    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:10:23.964568    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:10:23.964578    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:10:23.964588    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:10:23.964597    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:10:23.964611    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:10:23.964620    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:10:23.964627    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:10:23.964635    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:10:23.964643    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:10:23.964651    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:10:23.964658    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:10:23.964671    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:10:23.964683    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:10:23.964695    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:10:25.966672    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Attempt 2
	I0815 17:10:25.966689    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:25.966788    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6736
	I0815 17:10:25.967662    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Searching for a2:b9:ee:5c:5:91 in /var/db/dhcpd_leases ...
	I0815 17:10:25.967734    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:10:25.967743    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:10:25.967756    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:10:25.967765    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:10:25.967772    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:10:25.967814    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:10:25.967821    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:10:25.967828    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:10:25.967848    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:10:25.967886    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:10:25.967902    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:10:25.967913    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:10:25.967921    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:10:25.967928    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:10:25.967936    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:10:25.967943    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:10:25.967953    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:10:25.967960    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:10:25.967968    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:10:25.967979    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:10:25.967985    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:10:25.967992    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:10:25.968000    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:10:25.968009    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:10:27.852065    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:27 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0815 17:10:27.852144    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:27 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0815 17:10:27.852155    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:27 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0815 17:10:27.872032    6707 main.go:141] libmachine: (docker-flags-943000) DBG | 2024/08/15 17:10:27 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0815 17:10:27.969349    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Attempt 3
	I0815 17:10:27.969365    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:27.969465    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6736
	I0815 17:10:27.970246    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Searching for a2:b9:ee:5c:5:91 in /var/db/dhcpd_leases ...
	I0815 17:10:27.970307    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:10:27.970318    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:10:27.970326    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:10:27.970349    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:10:27.970375    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:10:27.970388    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:10:27.970396    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:10:27.970404    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:10:27.970412    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:10:27.970421    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:10:27.970432    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:10:27.970439    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:10:27.970447    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:10:27.970459    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:10:27.970480    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:10:27.970493    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:10:27.970504    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:10:27.970532    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:10:27.970546    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:10:27.970554    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:10:27.970572    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:10:27.970579    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:10:27.970585    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:10:27.970594    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:10:29.972300    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Attempt 4
	I0815 17:10:29.972318    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:29.972414    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6736
	I0815 17:10:29.973178    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Searching for a2:b9:ee:5c:5:91 in /var/db/dhcpd_leases ...
	I0815 17:10:29.973242    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:10:29.973251    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:10:29.973260    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:10:29.973266    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:10:29.973272    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:10:29.973278    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:10:29.973284    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:10:29.973291    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:10:29.973311    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:10:29.973318    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:10:29.973325    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:10:29.973333    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:10:29.973342    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:10:29.973350    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:10:29.973357    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:10:29.973369    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:10:29.973378    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:10:29.973384    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:10:29.973391    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:10:29.973398    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:10:29.973405    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:10:29.973412    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:10:29.973420    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:10:29.973428    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:10:31.973845    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Attempt 5
	I0815 17:10:31.973859    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:31.973924    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6736
	I0815 17:10:31.974793    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Searching for a2:b9:ee:5c:5:91 in /var/db/dhcpd_leases ...
	I0815 17:10:31.974835    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:10:31.974843    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:10:31.974855    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:10:31.974867    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:10:31.974873    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:10:31.974879    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:10:31.974885    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:10:31.974904    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:10:31.974923    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:10:31.974937    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:10:31.974953    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:10:31.974965    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:10:31.974974    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:10:31.974985    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:10:31.974997    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:10:31.975006    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:10:31.975012    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:10:31.975020    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:10:31.975031    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:10:31.975039    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:10:31.975055    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:10:31.975067    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:10:31.975086    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:10:31.975098    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:10:33.976103    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Attempt 6
	I0815 17:10:33.976118    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:33.976186    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6736
	I0815 17:10:33.976948    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Searching for a2:b9:ee:5c:5:91 in /var/db/dhcpd_leases ...
	I0815 17:10:33.977014    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:10:33.977026    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:10:33.977034    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:10:33.977043    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:10:33.977051    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:10:33.977063    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:10:33.977084    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:10:33.977097    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:10:33.977105    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:10:33.977113    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:10:33.977128    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:10:33.977141    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:10:33.977159    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:10:33.977181    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:10:33.977193    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:10:33.977202    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:10:33.977214    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:10:33.977224    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:10:33.977233    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:10:33.977241    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:10:33.977255    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:10:33.977268    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:10:33.977282    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:10:33.977292    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:10:35.977849    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Attempt 7
	I0815 17:10:35.977862    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:35.977948    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6736
	I0815 17:10:35.978729    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Searching for a2:b9:ee:5c:5:91 in /var/db/dhcpd_leases ...
	I0815 17:10:35.978754    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:10:35.978769    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:10:35.978791    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:10:35.978808    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:10:35.978833    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:10:35.978845    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:10:35.978852    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:10:35.978860    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:10:35.978872    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:10:35.978880    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:10:35.978905    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:10:35.978916    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:10:35.978924    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:10:35.978933    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:10:35.978941    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:10:35.978947    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:10:35.978963    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:10:35.978976    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:10:35.978985    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:10:35.978993    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:10:35.979004    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:10:35.979013    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:10:35.979028    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:10:35.979036    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:10:37.981027    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Attempt 8
	I0815 17:10:37.981042    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:37.981076    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6736
	I0815 17:10:37.981834    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Searching for a2:b9:ee:5c:5:91 in /var/db/dhcpd_leases ...
	I0815 17:10:37.981896    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:10:37.981910    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:10:37.981924    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:10:37.981939    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:10:37.981946    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:10:37.981955    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:10:37.981968    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:10:37.981976    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:10:37.981985    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:10:37.981992    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:10:37.982000    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:10:37.982015    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:10:37.982028    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:10:37.982048    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:10:37.982057    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:10:37.982065    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:10:37.982073    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:10:37.982085    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:10:37.982094    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:10:37.982101    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:10:37.982119    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:10:37.982131    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:10:37.982143    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:10:37.982156    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:10:39.984156    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Attempt 9
	I0815 17:10:39.984170    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:39.984224    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6736
	I0815 17:10:39.985046    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Searching for a2:b9:ee:5c:5:91 in /var/db/dhcpd_leases ...
	I0815 17:10:39.985089    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:10:39.985100    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:10:39.985118    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:10:39.985127    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:10:39.985137    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:10:39.985146    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:10:39.985155    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:10:39.985163    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:10:39.985170    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:10:39.985177    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:10:39.985185    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:10:39.985192    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:10:39.985198    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:10:39.985206    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:10:39.985218    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:10:39.985226    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:10:39.985233    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:10:39.985241    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:10:39.985248    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:10:39.985256    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:10:39.985270    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:10:39.985282    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:10:39.985296    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:10:39.985310    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:10:41.986032    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Attempt 10
	I0815 17:10:41.986056    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:41.986098    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6736
	I0815 17:10:41.986871    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Searching for a2:b9:ee:5c:5:91 in /var/db/dhcpd_leases ...
	I0815 17:10:41.986926    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:10:41.986941    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:10:41.986962    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:10:41.986987    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:10:41.987000    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:10:41.987006    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:10:41.987012    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:10:41.987022    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:10:41.987032    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:10:41.987044    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:10:41.987051    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:10:41.987057    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:10:41.987065    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:10:41.987076    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:10:41.987090    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:10:41.987104    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:10:41.987115    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:10:41.987123    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:10:41.987132    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:10:41.987144    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:10:41.987156    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:10:41.987166    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:10:41.987174    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:10:41.987183    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:10:43.989124    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Attempt 11
	I0815 17:10:43.989137    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:43.989204    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6736
	I0815 17:10:43.989980    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Searching for a2:b9:ee:5c:5:91 in /var/db/dhcpd_leases ...
	I0815 17:10:43.990006    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:10:43.990014    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:10:43.990033    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:10:43.990039    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:10:43.990046    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:10:43.990053    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:10:43.990061    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:10:43.990069    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:10:43.990077    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:10:43.990083    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:10:43.990105    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:10:43.990122    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:10:43.990132    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:10:43.990140    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:10:43.990148    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:10:43.990155    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:10:43.990166    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:10:43.990176    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:10:43.990191    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:10:43.990204    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:10:43.990214    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:10:43.990229    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:10:43.990244    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:10:43.990256    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:10:45.990959    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Attempt 12
	I0815 17:10:45.990972    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:45.991070    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6736
	I0815 17:10:45.991826    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Searching for a2:b9:ee:5c:5:91 in /var/db/dhcpd_leases ...
	I0815 17:10:45.991859    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:10:45.991868    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:10:45.991887    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:10:45.991895    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:10:45.991904    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:10:45.991914    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:10:45.991922    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:10:45.991931    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:10:45.991945    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:10:45.991966    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:10:45.991981    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:10:45.991993    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:10:45.992001    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:10:45.992010    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:10:45.992017    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:10:45.992025    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:10:45.992036    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:10:45.992043    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:10:45.992050    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:10:45.992059    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:10:45.992072    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:10:45.992080    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:10:45.992088    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:10:45.992096    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:10:47.992171    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Attempt 13
	I0815 17:10:47.992195    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:47.992247    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6736
	I0815 17:10:47.993026    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Searching for a2:b9:ee:5c:5:91 in /var/db/dhcpd_leases ...
	I0815 17:10:47.993086    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:10:47.993101    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:10:47.993115    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:10:47.993124    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:10:47.993131    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:10:47.993140    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:10:47.993147    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:10:47.993175    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:10:47.993189    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:10:47.993208    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:10:47.993216    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:10:47.993224    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:10:47.993233    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:10:47.993247    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:10:47.993260    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:10:47.993273    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:10:47.993282    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:10:47.993300    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:10:47.993312    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:10:47.993320    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:10:47.993337    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:10:47.993352    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:10:47.993364    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:10:47.993380    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:10:49.993761    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Attempt 14
	I0815 17:10:49.993775    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:49.993784    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6736
	I0815 17:10:49.994542    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Searching for a2:b9:ee:5c:5:91 in /var/db/dhcpd_leases ...
	I0815 17:10:49.994597    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:10:49.994609    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:10:49.994629    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:10:49.994645    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:10:49.994653    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:10:49.994661    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:10:49.994668    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:10:49.994677    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:10:49.994685    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:10:49.994694    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:10:49.994710    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:10:49.994725    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:10:49.994733    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:10:49.994739    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:10:49.994745    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:10:49.994754    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:10:49.994760    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:10:49.994769    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:10:49.994776    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:10:49.994782    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:10:49.994789    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:10:49.994797    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:10:49.994803    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:10:49.994809    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:10:51.995311    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Attempt 15
	I0815 17:10:51.995324    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:51.995399    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6736
	I0815 17:10:51.996187    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Searching for a2:b9:ee:5c:5:91 in /var/db/dhcpd_leases ...
	I0815 17:10:51.996212    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:10:51.996225    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:10:51.996235    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:10:51.996241    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:10:51.996251    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:10:51.996260    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:10:51.996267    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:10:51.996276    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:10:51.996283    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:10:51.996291    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:10:51.996307    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:10:51.996320    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:10:51.996337    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:10:51.996346    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:10:51.996353    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:10:51.996361    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:10:51.996395    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:10:51.996406    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:10:51.996414    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:10:51.996419    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:10:51.996436    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:10:51.996446    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:10:51.996459    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:10:51.996469    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:10:53.998430    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Attempt 16
	I0815 17:10:53.998455    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:53.998506    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6736
	I0815 17:10:53.999270    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Searching for a2:b9:ee:5c:5:91 in /var/db/dhcpd_leases ...
	I0815 17:10:53.999333    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:10:53.999341    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:10:53.999365    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:10:53.999376    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:10:53.999383    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:10:53.999389    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:10:53.999396    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:10:53.999402    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:10:53.999421    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:10:53.999434    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:10:53.999458    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:10:53.999467    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:10:53.999481    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:10:53.999494    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:10:53.999511    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:10:53.999522    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:10:53.999531    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:10:53.999537    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:10:53.999548    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:10:53.999558    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:10:53.999566    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:10:53.999572    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:10:53.999582    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:10:53.999591    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:10:56.000861    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Attempt 17
	I0815 17:10:56.000875    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:56.000953    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6736
	I0815 17:10:56.001696    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Searching for a2:b9:ee:5c:5:91 in /var/db/dhcpd_leases ...
	I0815 17:10:56.001746    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:10:56.001761    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:10:56.001787    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:10:56.001803    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:10:56.001820    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:10:56.001842    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:10:56.001855    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:10:56.001864    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:10:56.001871    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:10:56.001880    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:10:56.001888    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:10:56.001895    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:10:56.001908    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:10:56.001918    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:10:56.001935    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:10:56.001946    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:10:56.001957    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:10:56.001964    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:10:56.001972    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:10:56.001980    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:10:56.001986    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:10:56.001994    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:10:56.002001    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:10:56.002008    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:10:58.003543    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Attempt 18
	I0815 17:10:58.003559    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:10:58.003614    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6736
	I0815 17:10:58.004377    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Searching for a2:b9:ee:5c:5:91 in /var/db/dhcpd_leases ...
	I0815 17:10:58.004432    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:10:58.004441    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:10:58.004449    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:10:58.004457    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:10:58.004465    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:10:58.004471    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:10:58.004477    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:10:58.004484    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:10:58.004493    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:10:58.004500    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:10:58.004507    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:10:58.004514    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:10:58.004536    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:10:58.004552    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:10:58.004564    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:10:58.004579    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:10:58.004588    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:10:58.004596    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:10:58.004604    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:10:58.004611    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:10:58.004619    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:10:58.004630    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:10:58.004641    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:10:58.004657    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:11:00.005320    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Attempt 19
	I0815 17:11:00.005336    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:11:00.005419    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6736
	I0815 17:11:00.006177    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Searching for a2:b9:ee:5c:5:91 in /var/db/dhcpd_leases ...
	I0815 17:11:00.006230    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:11:00.006240    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:11:00.006249    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:11:00.006256    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:11:00.006262    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:11:00.006269    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:11:00.006281    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:11:00.006293    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:11:00.006301    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:11:00.006310    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:11:00.006319    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:11:00.006328    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:11:00.006335    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:11:00.006343    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:11:00.006370    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:11:00.006384    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:11:00.006396    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:11:00.006403    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:11:00.006409    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:11:00.006417    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:11:00.006424    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:11:00.006432    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:11:00.006456    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:11:00.006485    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:11:02.008444    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Attempt 20
	I0815 17:11:02.008456    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:11:02.008550    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6736
	I0815 17:11:02.009309    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Searching for a2:b9:ee:5c:5:91 in /var/db/dhcpd_leases ...
	I0815 17:11:02.009364    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:11:02.009372    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:11:02.009380    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:11:02.009385    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:11:02.009392    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:11:02.009400    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:11:02.009413    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:11:02.009422    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:11:02.009428    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:11:02.009435    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:11:02.009442    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:11:02.009449    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:11:02.009468    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:11:02.009479    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:11:02.009488    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:11:02.009496    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:11:02.009503    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:11:02.009509    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:11:02.009518    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:11:02.009528    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:11:02.009537    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:11:02.009544    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:11:02.009551    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:11:02.009564    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
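	
	One detail worth noting in these listings: bootpd appears to write MAC octets without leading zeros, which is why the target address ends in 5:91 and entries such as 66:b7:e0:28:69:8 and f2:8:cd:6:1:ac contain single-digit octets. Any code comparing a canonically formatted MAC against these entries would need to normalize it first. A hypothetical helper, an assumption for illustration rather than code taken from the driver:
	
		package main
		
		import (
			"fmt"
			"strings"
		)
		
		// trimMACLeadingZeros rewrites a MAC address the way the lease file
		// above prints it: each octet loses its leading zeros, so
		// "a2:b9:ee:5c:05:91" becomes "a2:b9:ee:5c:5:91".
		func trimMACLeadingZeros(mac string) string {
			parts := strings.Split(mac, ":")
			for i, p := range parts {
				t := strings.TrimLeft(p, "0")
				if t == "" {
					t = "0" // keep a literal zero octet as "0"
				}
				parts[i] = t
			}
			return strings.Join(parts, ":")
		}
		
		func main() {
			fmt.Println(trimMACLeadingZeros("a2:b9:ee:5c:05:91")) // a2:b9:ee:5c:5:91
		}
	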
	I0815 17:11:04.011568    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Attempt 21
	I0815 17:11:04.011583    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:11:04.011661    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6736
	I0815 17:11:04.012428    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Searching for a2:b9:ee:5c:5:91 in /var/db/dhcpd_leases ...
	I0815 17:11:04.012495    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:11:04.012506    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:11:04.012515    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:11:04.012520    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:11:04.012526    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:11:04.012532    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:11:04.012538    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:11:04.012543    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:11:04.012570    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:11:04.012584    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:11:04.012593    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:11:04.012603    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:11:04.012610    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:11:04.012618    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:11:04.012636    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:11:04.012650    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:11:04.012665    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:11:04.012678    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:11:04.012695    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:11:04.012706    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:11:04.012719    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:11:04.012728    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:11:04.012737    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:11:04.012747    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:11:06.012853    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Attempt 22
	I0815 17:11:06.012875    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:11:06.012944    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6736
	I0815 17:11:06.013686    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Searching for a2:b9:ee:5c:5:91 in /var/db/dhcpd_leases ...
	I0815 17:11:06.013746    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:11:06.013757    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:11:06.013768    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:11:06.013774    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:11:06.013782    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:11:06.013792    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:11:06.013798    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:11:06.013804    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:11:06.013811    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:11:06.013817    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:11:06.013833    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:11:06.013841    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:11:06.013848    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:11:06.013854    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:11:06.013867    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:11:06.013881    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:11:06.013890    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:11:06.013898    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:11:06.013915    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:11:06.013928    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:11:06.013936    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:11:06.013944    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:11:06.013959    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:11:06.013974    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:11:08.014599    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Attempt 23
	I0815 17:11:08.014629    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:11:08.014705    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6736
	I0815 17:11:08.015480    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Searching for a2:b9:ee:5c:5:91 in /var/db/dhcpd_leases ...
	I0815 17:11:08.015498    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:11:08.015545    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:11:08.015559    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:11:08.015568    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:11:08.015574    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:11:08.015588    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:11:08.015603    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:11:08.015613    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:11:08.015621    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:11:08.015629    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:11:08.015635    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:11:08.015642    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:11:08.015649    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:11:08.015662    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:11:08.015674    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:11:08.015692    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:11:08.015706    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:11:08.015714    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:11:08.015724    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:11:08.015732    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:11:08.015740    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:11:08.015747    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:11:08.015754    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:11:08.015761    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:11:10.017681    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Attempt 24
	I0815 17:11:10.017697    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:11:10.017766    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6736
	I0815 17:11:10.018538    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Searching for a2:b9:ee:5c:5:91 in /var/db/dhcpd_leases ...
	I0815 17:11:10.018593    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:11:10.018606    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:11:10.018628    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:11:10.018634    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:11:10.018641    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:11:10.018649    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:11:10.018660    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:11:10.018700    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:11:10.018715    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:11:10.018727    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:11:10.018735    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:11:10.018743    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:11:10.018750    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:11:10.018757    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:11:10.018763    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:11:10.018780    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:11:10.018793    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:11:10.018801    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:11:10.018809    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:11:10.018817    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:11:10.018824    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:11:10.018833    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:11:10.018841    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:11:10.018850    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:11:12.019807    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Attempt 25
	I0815 17:11:12.019819    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:11:12.019882    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6736
	I0815 17:11:12.020646    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Searching for a2:b9:ee:5c:5:91 in /var/db/dhcpd_leases ...
	I0815 17:11:12.020705    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:11:12.020715    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:11:12.020723    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:11:12.020730    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:11:12.020741    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:11:12.020751    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:11:12.020757    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:11:12.020777    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:11:12.020788    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:11:12.020806    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:11:12.020816    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:11:12.020824    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:11:12.020832    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:11:12.020839    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:11:12.020847    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:11:12.020864    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:11:12.020876    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:11:12.020887    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:11:12.020899    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:11:12.020908    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:11:12.020918    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:11:12.020929    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:11:12.020936    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:11:12.020942    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:11:14.022411    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Attempt 26
	I0815 17:11:14.022426    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:11:14.022506    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6736
	I0815 17:11:14.023261    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Searching for a2:b9:ee:5c:5:91 in /var/db/dhcpd_leases ...
	I0815 17:11:14.023323    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:11:14.023332    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:11:14.023340    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:11:14.023350    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:11:14.023357    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:11:14.023363    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:11:14.023379    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:11:14.023389    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:11:14.023398    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:11:14.023404    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:11:14.023412    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:11:14.023417    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:11:14.023424    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:11:14.023436    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:11:14.023444    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:11:14.023454    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:11:14.023462    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:11:14.023475    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:11:14.023484    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:11:14.023493    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:11:14.023501    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:11:14.023508    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:11:14.023516    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:11:14.023525    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:11:16.023810    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Attempt 27
	I0815 17:11:16.023824    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:11:16.023883    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6736
	I0815 17:11:16.024655    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Searching for a2:b9:ee:5c:5:91 in /var/db/dhcpd_leases ...
	I0815 17:11:16.024721    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:11:16.024736    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:11:16.024753    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:11:16.024773    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:11:16.024782    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:11:16.024805    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:11:16.024816    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:11:16.024825    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:11:16.024832    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:11:16.024838    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:11:16.024846    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:11:16.024853    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:11:16.024861    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:11:16.024868    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:11:16.024876    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:11:16.024882    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:11:16.024889    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:11:16.024895    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:11:16.024903    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:11:16.024912    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:11:16.024919    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:11:16.024942    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:11:16.024951    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:11:16.024971    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:11:18.026144    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Attempt 28
	I0815 17:11:18.026168    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:11:18.026237    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6736
	I0815 17:11:18.026996    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Searching for a2:b9:ee:5c:5:91 in /var/db/dhcpd_leases ...
	I0815 17:11:18.027053    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:11:18.027071    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:11:18.027082    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:11:18.027089    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:11:18.027098    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:11:18.027111    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:11:18.027119    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:11:18.027125    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:11:18.027133    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:11:18.027140    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:11:18.027150    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:11:18.027163    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:11:18.027170    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:11:18.027176    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:11:18.027182    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:11:18.027188    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:11:18.027195    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:11:18.027204    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:11:18.027212    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:11:18.027218    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:11:18.027224    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:11:18.027232    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:11:18.027246    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:11:18.027254    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:11:20.027730    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Attempt 29
	I0815 17:11:20.027745    6707 main.go:141] libmachine: (docker-flags-943000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:11:20.027806    6707 main.go:141] libmachine: (docker-flags-943000) DBG | hyperkit pid from json: 6736
	I0815 17:11:20.028560    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Searching for a2:b9:ee:5c:5:91 in /var/db/dhcpd_leases ...
	I0815 17:11:20.028622    6707 main.go:141] libmachine: (docker-flags-943000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:11:20.028636    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:11:20.028645    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:11:20.028652    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:11:20.028667    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:11:20.028683    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:11:20.028692    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:11:20.028698    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:11:20.028710    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:11:20.028718    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:11:20.028726    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:11:20.028734    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:11:20.028741    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:11:20.028753    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:11:20.028764    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:11:20.028772    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:11:20.028782    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:11:20.028794    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:11:20.028808    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:11:20.028816    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:11:20.028823    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:11:20.028843    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:11:20.028863    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:11:20.028876    6707 main.go:141] libmachine: (docker-flags-943000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:11:22.028947    6707 client.go:171] duration metric: took 1m0.795147864s to LocalClient.Create
	I0815 17:11:24.031071    6707 start.go:128] duration metric: took 1m2.829123133s to createHost
	I0815 17:11:24.031085    6707 start.go:83] releasing machines lock for "docker-flags-943000", held for 1m2.829249203s
	W0815 17:11:24.031182    6707 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p docker-flags-943000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a2:b9:ee:5c:5:91
	I0815 17:11:24.052893    6707 out.go:201] 
	W0815 17:11:24.079703    6707 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a2:b9:ee:5c:5:91
	W0815 17:11:24.079715    6707 out.go:270] * 
	W0815 17:11:24.080397    6707 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 17:11:24.140204    6707 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-943000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-943000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-943000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 50 (164.324125ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node docker-flags-943000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-943000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 50
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-943000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-943000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 50 (159.574229ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node docker-flags-943000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-943000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 50
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-943000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
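Both probes follow the same pattern: the test started the cluster with --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true, then asserts those values surface in `systemctl show docker` inside the guest. Because the VM never acquired an IP, both `ssh` invocations exit with status 50 and empty output, so every substring check fails. A minimal, self-contained sketch of that assertion pattern (paraphrased with hypothetical wiring, not docker_test.go's verbatim source):

```go
// Sketch only: a hypothetical standalone version of the check above,
// not the integration test's actual code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command the test runs in the guest (exit status 50 above,
	// because the control-plane endpoint was never resolvable).
	out, err := exec.Command("out/minikube-darwin-amd64",
		"-p", "docker-flags-943000", "ssh",
		"sudo systemctl show docker --property=Environment --no-pager",
	).CombinedOutput()
	if err != nil {
		fmt.Printf("ssh failed: %v (output %q)\n", err, out)
		return
	}
	// Each --docker-env pair must appear in the Environment property.
	for _, kv := range []string{"FOO=BAR", "BAZ=BAT"} {
		if !strings.Contains(string(out), kv) {
			fmt.Printf("expected env %q in %q\n", kv, out)
		}
	}
}
```

On a healthy cluster the Environment property would come back as a single `Environment=...` line containing both pairs, and the `--property=ExecStart` probe would show the `--debug` daemon flag that `--docker-opt=debug` injects; here both came back as "\n\n" because the host never reached Running.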
panic.go:626: *** TestDockerFlags FAILED at 2024-08-15 17:11:24.546456 -0700 PDT m=+3991.425562130
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-943000 -n docker-flags-943000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-943000 -n docker-flags-943000: exit status 7 (79.372027ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 17:11:24.624171    6753 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0815 17:11:24.624194    6753 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-943000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "docker-flags-943000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-943000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-943000: (5.233547719s)
--- FAIL: TestDockerFlags (143.28s)
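The DBG trace above is the whole story of this failure: after deleting and re-creating the VM, the hyperkit driver polls /var/db/dhcpd_leases about every two seconds for a lease whose hardware address matches the new VM's MAC (a2:b9:ee:5c:5:91). Every attempt finds the same 23 stale minikube leases (192.169.0.2 through .24) and never the new MAC, so after roughly a minute (the "1m0.795147864s to LocalClient.Create" metric) the driver aborts with "IP address never found in dhcp leases file". A minimal sketch of that lookup loop, assuming the conventional name=/ip_address=/hw_address= block layout of the vmnet lease file; the field names and the 30-attempt, 2-second retry budget are assumptions for illustration, not quoted from the driver source:

```go
// Sketch only (field names assumed, not the driver's actual source):
// the loop behind the "Searching for <MAC> in /var/db/dhcpd_leases"
// lines above: poll the lease file until a block's hw_address matches.
package main

import (
	"fmt"
	"os"
	"strings"
	"time"
)

// findIPForMAC scans the vmnet lease file for mac and returns its IP,
// or "" if no lease matches (the failure mode of the run above).
func findIPForMAC(leaseFile, mac string) (string, error) {
	data, err := os.ReadFile(leaseFile)
	if err != nil {
		return "", err
	}
	var ip, hw string
	for _, line := range strings.Split(string(data), "\n") {
		line = strings.TrimSpace(line)
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			hw = strings.TrimPrefix(line, "hw_address=")
			if i := strings.IndexByte(hw, ','); i >= 0 {
				hw = hw[i+1:] // drop the "1," hardware-type prefix
			}
		case line == "}": // end of one lease block
			if strings.EqualFold(hw, mac) {
				return ip, nil
			}
			ip, hw = "", ""
		}
	}
	return "", nil
}

func main() {
	const mac = "a2:b9:ee:5c:5:91" // MAC the driver searched for above
	for attempt := 1; attempt <= 30; attempt++ {
		ip, err := findIPForMAC("/var/db/dhcpd_leases", mac)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if ip != "" {
			fmt.Println("found IP:", ip)
			return
		}
		time.Sleep(2 * time.Second) // matches the 2s spacing in the trace
	}
	fmt.Println("IP address never found in dhcp leases file")
}
```

The remediation the error text itself suggests (`minikube delete -p docker-flags-943000`) is what the post-mortem cleanup at helpers_test.go:178 ran; the failure pattern repeats in the next test below.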

                                                
                                    
TestForceSystemdFlag (147.17s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-182000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-182000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (2m21.590959509s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-182000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-977/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "force-systemd-flag-182000" primary control-plane node in "force-systemd-flag-182000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "force-systemd-flag-182000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0815 17:03:05.272733    6445 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:03:05.272995    6445 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:03:05.273001    6445 out.go:358] Setting ErrFile to fd 2...
	I0815 17:03:05.273005    6445 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:03:05.273186    6445 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-977/.minikube/bin
	I0815 17:03:05.274598    6445 out.go:352] Setting JSON to false
	I0815 17:03:05.297387    6445 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3756,"bootTime":1723762829,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0815 17:03:05.297481    6445 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 17:03:05.318047    6445 out.go:177] * [force-systemd-flag-182000] minikube v1.33.1 on Darwin 14.6.1
	I0815 17:03:05.362566    6445 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 17:03:05.362581    6445 notify.go:220] Checking for updates...
	I0815 17:03:05.404469    6445 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 17:03:05.427690    6445 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0815 17:03:05.448737    6445 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:03:05.469507    6445 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-977/.minikube
	I0815 17:03:05.490705    6445 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:03:05.512621    6445 config.go:182] Loaded profile config "NoKubernetes-614000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0815 17:03:05.512812    6445 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:03:05.542601    6445 out.go:177] * Using the hyperkit driver based on user configuration
	I0815 17:03:05.584597    6445 start.go:297] selected driver: hyperkit
	I0815 17:03:05.584625    6445 start.go:901] validating driver "hyperkit" against <nil>
	I0815 17:03:05.584647    6445 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:03:05.589108    6445 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:03:05.589221    6445 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19452-977/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0815 17:03:05.597705    6445 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0815 17:03:05.601556    6445 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 17:03:05.601577    6445 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0815 17:03:05.601606    6445 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 17:03:05.601792    6445 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0815 17:03:05.601816    6445 cni.go:84] Creating CNI manager for ""
	I0815 17:03:05.601844    6445 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 17:03:05.601854    6445 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 17:03:05.601916    6445 start.go:340] cluster config:
	{Name:force-systemd-flag-182000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-182000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:03:05.602013    6445 iso.go:125] acquiring lock: {Name:mkb93fc66b60be15dffa9eb2999a4be8d8d4d218 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:03:05.644456    6445 out.go:177] * Starting "force-systemd-flag-182000" primary control-plane node in "force-systemd-flag-182000" cluster
	I0815 17:03:05.665592    6445 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 17:03:05.665663    6445 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0815 17:03:05.665694    6445 cache.go:56] Caching tarball of preloaded images
	I0815 17:03:05.665940    6445 preload.go:172] Found /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0815 17:03:05.665957    6445 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 17:03:05.666106    6445 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/force-systemd-flag-182000/config.json ...
	I0815 17:03:05.666142    6445 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/force-systemd-flag-182000/config.json: {Name:mk266adec7f6945884763b93f63f697cd4051678 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
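The cluster config dumped at start.go:340 above is persisted at exactly this point: profile.go:143 writes it as plain JSON under the profile directory. A few fields can therefore be read back directly, as in this sketch (the field names are inferred from the struct dump in the log and are an assumption about the on-disk schema, not a documented API):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Subset of the profile config; names mirror the struct dump in the log.
type clusterConfig struct {
	Name             string
	Driver           string
	Memory           int
	CPUs             int
	KubernetesConfig struct{ KubernetesVersion string }
}

func main() {
	path := "/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/force-systemd-flag-182000/config.json"
	b, err := os.ReadFile(path)
	if err != nil {
		fmt.Println(err)
		return
	}
	var cfg clusterConfig
	if err := json.Unmarshal(b, &cfg); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("%s: driver=%s, %dMB/%d vCPUs, kubernetes %s\n",
		cfg.Name, cfg.Driver, cfg.Memory, cfg.CPUs, cfg.KubernetesConfig.KubernetesVersion)
}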
	I0815 17:03:05.666780    6445 start.go:360] acquireMachinesLock for force-systemd-flag-182000: {Name:mkac8034c8a60617271f2754a03dcaf74fc4fef7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:03:05.666905    6445 start.go:364] duration metric: took 90.732µs to acquireMachinesLock for "force-systemd-flag-182000"
	I0815 17:03:05.666963    6445 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-182000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-182000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 17:03:05.667037    6445 start.go:125] createHost starting for "" (driver="hyperkit")
	I0815 17:03:05.688336    6445 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0815 17:03:05.688667    6445 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 17:03:05.688746    6445 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 17:03:05.698530    6445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54491
	I0815 17:03:05.698891    6445 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:03:05.699290    6445 main.go:141] libmachine: Using API Version  1
	I0815 17:03:05.699298    6445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:03:05.699569    6445 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:03:05.699708    6445 main.go:141] libmachine: (force-systemd-flag-182000) Calling .GetMachineName
	I0815 17:03:05.699812    6445 main.go:141] libmachine: (force-systemd-flag-182000) Calling .DriverName
	I0815 17:03:05.699927    6445 start.go:159] libmachine.API.Create for "force-systemd-flag-182000" (driver="hyperkit")
	I0815 17:03:05.699953    6445 client.go:168] LocalClient.Create starting
	I0815 17:03:05.699987    6445 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem
	I0815 17:03:05.700042    6445 main.go:141] libmachine: Decoding PEM data...
	I0815 17:03:05.700059    6445 main.go:141] libmachine: Parsing certificate...
	I0815 17:03:05.700120    6445 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem
	I0815 17:03:05.700157    6445 main.go:141] libmachine: Decoding PEM data...
	I0815 17:03:05.700169    6445 main.go:141] libmachine: Parsing certificate...
	I0815 17:03:05.700181    6445 main.go:141] libmachine: Running pre-create checks...
	I0815 17:03:05.700191    6445 main.go:141] libmachine: (force-systemd-flag-182000) Calling .PreCreateCheck
	I0815 17:03:05.700273    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:03:05.700455    6445 main.go:141] libmachine: (force-systemd-flag-182000) Calling .GetConfigRaw
	I0815 17:03:05.709602    6445 main.go:141] libmachine: Creating machine...
	I0815 17:03:05.709615    6445 main.go:141] libmachine: (force-systemd-flag-182000) Calling .Create
	I0815 17:03:05.709735    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:03:05.709877    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | I0815 17:03:05.709730    6453 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19452-977/.minikube
	I0815 17:03:05.709938    6445 main.go:141] libmachine: (force-systemd-flag-182000) Downloading /Users/jenkins/minikube-integration/19452-977/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-977/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0815 17:03:05.895245    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | I0815 17:03:05.895181    6453 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/id_rsa...
	I0815 17:03:05.949777    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | I0815 17:03:05.949685    6453 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/force-systemd-flag-182000.rawdisk...
	I0815 17:03:05.949791    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Writing magic tar header
	I0815 17:03:05.949803    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Writing SSH key tar header
	I0815 17:03:05.950223    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | I0815 17:03:05.950184    6453 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000 ...
	I0815 17:03:06.325515    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:03:06.325533    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/hyperkit.pid
	I0815 17:03:06.325547    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Using UUID 0215acc2-9567-4482-9aa1-b8b0da619f57
	I0815 17:03:06.350202    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Generated MAC b6:db:e2:e2:82:57
	I0815 17:03:06.350219    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-182000
	I0815 17:03:06.350257    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:03:06 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"0215acc2-9567-4482-9aa1-b8b0da619f57", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 17:03:06.350286    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:03:06 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"0215acc2-9567-4482-9aa1-b8b0da619f57", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 17:03:06.350325    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:03:06 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "0215acc2-9567-4482-9aa1-b8b0da619f57", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/force-systemd-flag-182000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-182000"}
	I0815 17:03:06.350368    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:03:06 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 0215acc2-9567-4482-9aa1-b8b0da619f57 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/force-systemd-flag-182000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/console-ring -f kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-182000"
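The Arguments/CmdLine entries above are the whole contract between the driver and hyperkit: -c and -m size the guest (matching the test's --memory=2048), each -s slot attaches a device (host bridge, LPC, virtio-net NIC, the raw disk, the boot ISO, a virtio entropy device), -U pins the VM UUID from which vmnet derives the MAC, -l wires com1 to a pty plus a console log, and -f kexec boots the kernel/initrd pair directly with the logged kernel command line. Rebuilt as a standalone program for reference (a sketch only: the state directory is this run's, the kernel cmdline is abbreviated, and hyperkit would need to run as root, as the uid=0 lines show):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	state := "/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000"
	cmd := exec.Command("/usr/local/bin/hyperkit",
		"-A", "-u",
		"-F", state+"/hyperkit.pid", // pidfile the driver polls for liveness
		"-c", "2", "-m", "2048M", // vCPUs and RAM
		"-s", "0:0,hostbridge", "-s", "31,lpc",
		"-s", "1:0,virtio-net", // NIC; its MAC is derived from the -U UUID
		"-U", "0215acc2-9567-4482-9aa1-b8b0da619f57",
		"-s", "2:0,virtio-blk,"+state+"/force-systemd-flag-182000.rawdisk",
		"-s", "3,ahci-cd,"+state+"/boot2docker.iso",
		"-s", "4,virtio-rnd",
		"-l", "com1,autopty="+state+"/tty,log="+state+"/console-ring",
		// direct kernel boot; the full cmdline is in the log entry above
		"-f", "kexec,"+state+"/bzimage,"+state+"/initrd,loglevel=3 console=ttyS0 ...",
	)
	fmt.Println(cmd.String()) // print rather than execute
}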
	I0815 17:03:06.350381    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:03:06 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0815 17:03:06.353427    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:03:06 DEBUG: hyperkit: Pid is 6456
	I0815 17:03:06.354340    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 0
	I0815 17:03:06.354360    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:03:06.354390    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:03:06.355321    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for b6:db:e2:e2:82:57 in /var/db/dhcpd_leases ...
	I0815 17:03:06.355403    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 22 entries in /var/db/dhcpd_leases!
	I0815 17:03:06.355420    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66bfe870}
	I0815 17:03:06.355431    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:03:06.355452    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:03:06.355460    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:03:06.355468    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:03:06.355474    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:03:06.355481    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:03:06.355487    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:03:06.355496    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:03:06.355505    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:03:06.355517    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:03:06.355526    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:03:06.355535    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:03:06.355544    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:03:06.355572    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:03:06.355588    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:03:06.355598    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:03:06.355606    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:03:06.355652    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:03:06.355682    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:03:06.355699    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:03:06.355726    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:03:06.361075    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:03:06 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0815 17:03:06.369412    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:03:06 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0815 17:03:06.370253    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:03:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 17:03:06.370276    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:03:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 17:03:06.370308    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:03:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 17:03:06.370324    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:03:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 17:03:06.749319    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:03:06 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0815 17:03:06.749333    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:03:06 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0815 17:03:06.864170    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:03:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 17:03:06.864188    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:03:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 17:03:06.864199    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:03:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 17:03:06.864207    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:03:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 17:03:06.865168    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:03:06 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0815 17:03:06.865180    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:03:06 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0815 17:03:08.356640    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 1
	I0815 17:03:08.356660    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:03:08.356713    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:03:08.357511    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for b6:db:e2:e2:82:57 in /var/db/dhcpd_leases ...
	I0815 17:03:08.357527    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 22 entries in /var/db/dhcpd_leases!
	I0815 17:03:08.357551    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66bfe870}
	I0815 17:03:08.357563    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:03:08.357571    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:03:08.357578    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:03:08.357593    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:03:08.357610    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:03:08.357618    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:03:08.357624    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:03:08.357656    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:03:08.357670    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:03:08.357678    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:03:08.357686    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:03:08.357693    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:03:08.357702    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:03:08.357709    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:03:08.357717    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:03:08.357725    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:03:08.357733    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:03:08.357740    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:03:08.357748    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:03:08.357754    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:03:08.357762    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:03:10.359539    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 2
	I0815 17:03:10.359554    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:03:10.359667    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:03:10.360452    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for b6:db:e2:e2:82:57 in /var/db/dhcpd_leases ...
	I0815 17:03:10.360503    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 22 entries in /var/db/dhcpd_leases!
	I0815 17:03:10.360517    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66bfe870}
	I0815 17:03:10.360530    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:03:10.360541    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:03:10.360549    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:03:10.360557    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:03:10.360565    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:03:10.360571    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:03:10.360578    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:03:10.360584    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:03:10.360595    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:03:10.360604    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:03:10.360613    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:03:10.360620    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:03:10.360628    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:03:10.360635    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:03:10.360643    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:03:10.360651    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:03:10.360659    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:03:10.360668    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:03:10.360675    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:03:10.360690    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:03:10.360698    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:03:12.253757    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:03:12 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0815 17:03:12.253958    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:03:12 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0815 17:03:12.253970    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:03:12 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0815 17:03:12.280320    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:03:12 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0815 17:03:12.362886    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 3
	I0815 17:03:12.362912    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:03:12.363001    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:03:12.364510    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for b6:db:e2:e2:82:57 in /var/db/dhcpd_leases ...
	I0815 17:03:12.364606    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 22 entries in /var/db/dhcpd_leases!
	I0815 17:03:12.364627    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66bfe870}
	I0815 17:03:12.364645    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:03:12.364659    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:03:12.364693    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:03:12.364721    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:03:12.364737    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:03:12.364768    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:03:12.364787    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:03:12.364802    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:03:12.364814    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:03:12.364841    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:03:12.364875    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:03:12.364890    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:03:12.364907    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:03:12.364930    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:03:12.364946    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:03:12.364969    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:03:12.364985    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:03:12.365003    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:03:12.365014    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:03:12.365034    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:03:12.365056    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:03:14.365938    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 4
	I0815 17:03:14.365953    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:03:14.366052    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:03:14.366824    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for b6:db:e2:e2:82:57 in /var/db/dhcpd_leases ...
	I0815 17:03:14.366880    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 22 entries in /var/db/dhcpd_leases!
	I0815 17:03:14.366888    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66bfe870}
	I0815 17:03:14.366904    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:03:14.366929    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:03:14.366940    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:03:14.366949    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:03:14.366955    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:03:14.366962    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:03:14.366971    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:03:14.366988    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:03:14.367002    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:03:14.367011    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:03:14.367020    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:03:14.367027    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:03:14.367036    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:03:14.367046    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:03:14.367055    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:03:14.367067    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:03:14.367076    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:03:14.367083    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:03:14.367092    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:03:14.367099    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:03:14.367107    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:03:16.369172    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 5
	I0815 17:03:16.369188    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:03:16.369256    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:03:16.370040    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for b6:db:e2:e2:82:57 in /var/db/dhcpd_leases ...
	I0815 17:03:16.370104    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 22 entries in /var/db/dhcpd_leases!
	I0815 17:03:16.370118    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:03:16.370127    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:03:16.370134    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:03:16.370141    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:03:16.370148    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:03:16.370165    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:03:16.370181    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:03:16.370194    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:03:16.370204    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:03:16.370213    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:03:16.370221    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:03:16.370232    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:03:16.370239    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:03:16.370246    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:03:16.370254    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:03:16.370273    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:03:16.370283    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:03:16.370291    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:03:16.370305    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:03:16.370316    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:03:16.370331    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:03:16.370346    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:03:18.372103    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 6
	I0815 17:03:18.372118    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:03:18.372161    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:03:18.373018    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for b6:db:e2:e2:82:57 in /var/db/dhcpd_leases ...
	I0815 17:03:18.373082    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 22 entries in /var/db/dhcpd_leases!
	I0815 17:03:18.373092    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:03:18.373101    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:03:18.373106    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:03:18.373113    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:03:18.373119    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:03:18.373133    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:03:18.373144    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:03:18.373151    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:03:18.373159    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:03:18.373170    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:03:18.373179    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:03:18.373186    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:03:18.373192    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:03:18.373209    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:03:18.373221    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:03:18.373229    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:03:18.373238    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:03:18.373245    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:03:18.373254    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:03:18.373261    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:03:18.373270    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:03:18.373279    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
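	Every attempt above follows the same pattern: re-read /var/db/dhcpd_leases, scan its records for the new VM's MAC address (b6:db:e2:e2:82:57 here), and sleep roughly two seconds before the next try. Below is a minimal, illustrative Go sketch of that polling loop. It is not the driver's actual implementation: the record layout (name=/ip_address=/hw_address= lines between { and }) is assumed from the standard macOS dhcpd_leases format, and the struct fields, function names, and retry parameters are hypothetical.
	
	// lease_scan.go - illustrative sketch of polling /var/db/dhcpd_leases
	// for a VM's MAC address; layout assumptions noted above.
	package main
	
	import (
		"bufio"
		"fmt"
		"os"
		"strings"
		"time"
	)
	
	// lease mirrors the fields printed in the "dhcp entry" log lines.
	type lease struct {
		Name, IPAddress, HWAddress string
	}
	
	// parseLeases scans the lease file for brace-delimited records.
	func parseLeases(path string) ([]lease, error) {
		f, err := os.Open(path)
		if err != nil {
			return nil, err
		}
		defer f.Close()
	
		var leases []lease
		var cur lease
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case line == "{":
				cur = lease{} // start of a new record
			case line == "}":
				leases = append(leases, cur) // end of record
			case strings.HasPrefix(line, "name="):
				cur.Name = strings.TrimPrefix(line, "name=")
			case strings.HasPrefix(line, "ip_address="):
				cur.IPAddress = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// hw_address is stored as "<type>,<mac>", e.g. "1,b6:db:e2:e2:82:57".
				if _, mac, ok := strings.Cut(strings.TrimPrefix(line, "hw_address="), ","); ok {
					cur.HWAddress = mac
				}
			}
		}
		return leases, sc.Err()
	}
	
	// waitForIP re-reads the lease file until the MAC appears or attempts run out.
	func waitForIP(path, mac string, attempts int, delay time.Duration) (string, error) {
		for i := 1; i <= attempts; i++ {
			leases, err := parseLeases(path)
			if err != nil {
				return "", err
			}
			for _, l := range leases {
				if l.HWAddress == mac {
					return l.IPAddress, nil
				}
			}
			fmt.Fprintf(os.Stderr, "attempt %d: %s not in %d leases, retrying\n", i, mac, len(leases))
			time.Sleep(delay)
		}
		return "", fmt.Errorf("no DHCP lease for %s after %d attempts", mac, attempts)
	}
	
	func main() {
		ip, err := waitForIP("/var/db/dhcpd_leases", "b6:db:e2:e2:82:57", 60, 2*time.Second)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println(ip)
	}
	
	Two details visible in the log are worth noting. The file drops leading zeros in hex octets (e.g. a:40:7e:9c:45:9d above), so a robust matcher would normalize both MACs before comparing. And the Lease field appears to be the expiry time as a hex-encoded Unix timestamp (0x66bfe253 is roughly 2024-08-16 UTC, about a day after this run), which is consistent with a 24-hour lease.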
	I0815 17:03:20.375336    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 7
	I0815 17:03:20.375351    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:03:20.375418    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:03:20.376265    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for b6:db:e2:e2:82:57 in /var/db/dhcpd_leases ...
	I0815 17:03:20.376312    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 22 entries in /var/db/dhcpd_leases!
	I0815 17:03:20.376324    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:03:20.376346    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:03:20.376358    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:03:20.376369    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:03:20.376378    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:03:20.376386    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:03:20.376394    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:03:20.376401    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:03:20.376407    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:03:20.376422    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:03:20.376433    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:03:20.376443    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:03:20.376456    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:03:20.376471    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:03:20.376480    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:03:20.376488    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:03:20.376496    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:03:20.376503    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:03:20.376512    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:03:20.376532    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:03:20.376547    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:03:20.376562    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:03:22.377054    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 8
	I0815 17:03:22.377066    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:03:22.377150    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:03:22.377919    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for b6:db:e2:e2:82:57 in /var/db/dhcpd_leases ...
	I0815 17:03:22.377966    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 22 entries in /var/db/dhcpd_leases!
	I0815 17:03:22.377977    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:03:22.377986    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:03:22.377992    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:03:22.377999    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:03:22.378006    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:03:22.378022    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:03:22.378034    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:03:22.378049    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:03:22.378068    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:03:22.378081    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:03:22.378088    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:03:22.378105    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:03:22.378114    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:03:22.378121    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:03:22.378128    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:03:22.378139    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:03:22.378149    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:03:22.378156    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:03:22.378164    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:03:22.378172    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:03:22.378185    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:03:22.378194    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:03:24.380227    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 9
	I0815 17:03:24.380241    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:03:24.380370    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:03:24.381195    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for b6:db:e2:e2:82:57 in /var/db/dhcpd_leases ...
	I0815 17:03:24.381242    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 22 entries in /var/db/dhcpd_leases!
	I0815 17:03:24.381253    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:03:24.381262    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:03:24.381270    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:03:24.381287    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:03:24.381295    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:03:24.381303    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:03:24.381312    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:03:24.381319    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:03:24.381326    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:03:24.381336    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:03:24.381344    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:03:24.381351    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:03:24.381358    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:03:24.381366    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:03:24.381375    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:03:24.381384    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:03:24.381391    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:03:24.381404    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:03:24.381418    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:03:24.381430    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:03:24.381438    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:03:24.381447    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:03:26.381860    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 10
	I0815 17:03:26.381883    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:03:26.382000    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:03:26.382783    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for b6:db:e2:e2:82:57 in /var/db/dhcpd_leases ...
	I0815 17:03:26.382856    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 22 entries in /var/db/dhcpd_leases!
	I0815 17:03:26.382866    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:03:26.382877    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:03:26.382885    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:03:26.382892    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:03:26.382901    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:03:26.382910    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:03:26.382917    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:03:26.382924    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:03:26.382932    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:03:26.382940    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:03:26.382948    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:03:26.382963    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:03:26.382982    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:03:26.382994    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:03:26.383002    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:03:26.383010    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:03:26.383017    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:03:26.383024    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:03:26.383032    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:03:26.383048    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:03:26.383060    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:03:26.383070    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:03:28.384411    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 11
	I0815 17:03:28.384423    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:03:28.384499    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:03:28.385298    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for b6:db:e2:e2:82:57 in /var/db/dhcpd_leases ...
	I0815 17:03:28.385343    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 22 entries in /var/db/dhcpd_leases!
	I0815 17:03:28.385354    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:03:28.385362    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:03:28.385370    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:03:28.385386    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:03:28.385401    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:03:28.385409    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:03:28.385414    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:03:28.385432    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:03:28.385444    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:03:28.385453    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:03:28.385462    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:03:28.385472    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:03:28.385482    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:03:28.385489    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:03:28.385496    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:03:28.385502    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:03:28.385508    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:03:28.385515    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:03:28.385521    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:03:28.385528    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:03:28.385535    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:03:28.385544    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:03:30.385865    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 12
	I0815 17:03:30.385875    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:03:30.385911    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:03:30.387080    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for b6:db:e2:e2:82:57 in /var/db/dhcpd_leases ...
	I0815 17:03:30.387141    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 22 entries in /var/db/dhcpd_leases!
	I0815 17:03:30.387153    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:03:30.387168    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:03:30.387175    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:03:30.387182    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:03:30.387195    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:03:30.387201    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:03:30.387209    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:03:30.387222    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:03:30.387230    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:03:30.387238    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:03:30.387249    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:03:30.387257    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:03:30.387270    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:03:30.387278    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:03:30.387286    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:03:30.387295    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:03:30.387301    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:03:30.387310    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:03:30.387317    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:03:30.387325    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:03:30.387332    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:03:30.387340    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:03:32.387660    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 13
	I0815 17:03:32.387676    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:03:32.387736    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:03:32.388562    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for b6:db:e2:e2:82:57 in /var/db/dhcpd_leases ...
	I0815 17:03:32.388612    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 22 entries in /var/db/dhcpd_leases!
	I0815 17:03:32.388628    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:03:32.388647    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:03:32.388660    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:03:32.388669    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:03:32.388678    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:03:32.388695    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:03:32.388707    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:03:32.388715    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:03:32.388724    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:03:32.388731    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:03:32.388739    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:03:32.388746    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:03:32.388754    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:03:32.388767    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:03:32.388776    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:03:32.388786    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:03:32.388793    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:03:32.388800    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:03:32.388806    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:03:32.388814    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:03:32.388827    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:03:32.388837    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:03:34.389783    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 14
	I0815 17:03:34.389797    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:03:34.389866    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:03:34.390761    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for b6:db:e2:e2:82:57 in /var/db/dhcpd_leases ...
	I0815 17:03:34.390827    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 22 entries in /var/db/dhcpd_leases!
	I0815 17:03:34.390839    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:03:34.390861    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:03:34.390871    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:03:34.390888    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:03:34.390902    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:03:34.390915    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:03:34.390925    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:03:34.390933    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:03:34.390941    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:03:34.390949    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:03:34.390957    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:03:34.390970    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:03:34.390977    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:03:34.390994    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:03:34.391014    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:03:34.391030    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:03:34.391041    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:03:34.391048    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:03:34.391056    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:03:34.391072    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:03:34.391084    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:03:34.391094    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:03:36.393066    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 15
	I0815 17:03:36.393082    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:03:36.393206    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:03:36.393968    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for b6:db:e2:e2:82:57 in /var/db/dhcpd_leases ...
	I0815 17:03:36.394016    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 22 entries in /var/db/dhcpd_leases!
	I0815 17:03:36.394025    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:03:36.394034    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:03:36.394042    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:03:36.394050    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:03:36.394056    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:03:36.394075    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:03:36.394095    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:03:36.394109    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:03:36.394120    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:03:36.394131    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:03:36.394137    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:03:36.394144    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:03:36.394151    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:03:36.394162    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:03:36.394170    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:03:36.394177    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:03:36.394185    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:03:36.394199    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:03:36.394212    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:03:36.394232    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:03:36.394242    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:03:36.394251    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:03:38.396246    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 16
	I0815 17:03:38.396259    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:03:38.396326    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:03:38.397357    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for b6:db:e2:e2:82:57 in /var/db/dhcpd_leases ...
	I0815 17:03:38.397414    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 22 entries in /var/db/dhcpd_leases!
	I0815 17:03:38.397426    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:03:38.397435    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:03:38.397443    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:03:38.397450    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:03:38.397457    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:03:38.397464    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:03:38.397471    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:03:38.397490    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:03:38.397503    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:03:38.397511    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:03:38.397521    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:03:38.397531    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:03:38.397540    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:03:38.397547    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:03:38.397555    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:03:38.397563    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:03:38.397577    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:03:38.397634    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:03:38.397645    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:03:38.397653    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:03:38.397661    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:03:38.397670    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:03:40.398852    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 17
	I0815 17:03:40.398870    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:03:40.398932    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:03:40.399717    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for b6:db:e2:e2:82:57 in /var/db/dhcpd_leases ...
	I0815 17:03:40.399777    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 22 entries in /var/db/dhcpd_leases!
	I0815 17:03:40.399788    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:03:40.399801    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:03:40.399809    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:03:40.399815    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:03:40.399821    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:03:40.399828    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:03:40.399835    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:03:40.399841    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:03:40.399850    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:03:40.399857    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:03:40.399865    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:03:40.399881    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:03:40.399894    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:03:40.399903    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:03:40.399915    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:03:40.399922    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:03:40.399931    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:03:40.399938    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:03:40.399947    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:03:40.399961    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:03:40.399975    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:03:40.399985    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:03:42.401434    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 18
	I0815 17:03:42.401449    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:03:42.401508    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:03:42.402305    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for b6:db:e2:e2:82:57 in /var/db/dhcpd_leases ...
	I0815 17:03:42.402377    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 22 entries in /var/db/dhcpd_leases!
	I0815 17:03:42.402387    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:03:42.402399    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:03:42.402420    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:03:42.402431    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:03:42.402442    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:03:42.402450    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:03:42.402456    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:03:42.402464    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:03:42.402486    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:03:42.402497    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:03:42.402507    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:03:42.402514    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:03:42.402521    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:03:42.402528    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:03:42.402544    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:03:42.402556    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:03:42.402564    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:03:42.402572    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:03:42.402579    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:03:42.402587    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:03:42.402595    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:03:42.402602    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:03:44.404097    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 19
	I0815 17:03:44.404112    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:03:44.404175    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:03:44.404960    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for b6:db:e2:e2:82:57 in /var/db/dhcpd_leases ...
	I0815 17:03:44.405007    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 22 entries in /var/db/dhcpd_leases!
	I0815 17:03:44.405027    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:03:44.405046    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:03:44.405053    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:03:44.405060    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:03:44.405070    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:03:44.405092    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:03:44.405107    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:03:44.405116    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:03:44.405124    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:03:44.405134    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:03:44.405143    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:03:44.405150    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:03:44.405158    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:03:44.405177    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:03:44.405186    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:03:44.405193    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:03:44.405202    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:03:44.405209    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:03:44.405215    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:03:44.405233    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:03:44.405246    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:03:44.405256    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:03:46.407246    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 20
	I0815 17:03:46.407260    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:03:46.407329    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:03:46.408157    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for b6:db:e2:e2:82:57 in /var/db/dhcpd_leases ...
	I0815 17:03:46.408201    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 22 entries in /var/db/dhcpd_leases!
	I0815 17:03:46.408215    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:03:46.408229    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:03:46.408236    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:03:46.408244    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:03:46.408265    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:03:46.408284    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:03:46.408297    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:03:46.408307    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:03:46.408316    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:03:46.408324    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:03:46.408332    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:03:46.408345    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:03:46.408355    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:03:46.408365    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:03:46.408372    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:03:46.408379    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:03:46.408384    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:03:46.408390    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:03:46.408401    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:03:46.408410    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:03:46.408430    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:03:46.408441    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:03:48.409254    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 21
	I0815 17:03:48.409269    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:03:48.409338    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:03:48.410172    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for b6:db:e2:e2:82:57 in /var/db/dhcpd_leases ...
	I0815 17:03:48.410216    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 22 entries in /var/db/dhcpd_leases!
	I0815 17:03:48.410227    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:03:48.410236    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:03:48.410244    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:03:48.410258    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:03:48.410268    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:03:48.410282    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:03:48.410293    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:03:48.410306    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:03:48.410316    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:03:48.410325    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:03:48.410334    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:03:48.410342    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:03:48.410350    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:03:48.410358    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:03:48.410364    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:03:48.410375    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:03:48.410389    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:03:48.410400    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:03:48.410411    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:03:48.410419    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:03:48.410427    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:03:48.410436    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:03:50.411948    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 22
	I0815 17:03:50.411961    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:03:50.412042    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:03:50.412792    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for b6:db:e2:e2:82:57 in /var/db/dhcpd_leases ...
	I0815 17:03:50.412865    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 22 entries in /var/db/dhcpd_leases!
	I0815 17:03:50.412878    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:03:50.412893    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:03:50.412901    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:03:50.412907    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:03:50.412914    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:03:50.412921    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:03:50.412929    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:03:50.412936    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:03:50.412942    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:03:50.412949    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:03:50.412955    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:03:50.412963    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:03:50.412974    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:03:50.412982    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:03:50.412990    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:03:50.412996    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:03:50.413004    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:03:50.413017    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:03:50.413024    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:03:50.413033    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:03:50.413039    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:03:50.413046    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:03:52.413595    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 23
	I0815 17:03:52.413608    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:03:52.413646    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:03:52.414536    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for b6:db:e2:e2:82:57 in /var/db/dhcpd_leases ...
	I0815 17:03:52.414565    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 22 entries in /var/db/dhcpd_leases!
	I0815 17:03:52.414579    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:03:52.414589    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:03:52.414607    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:03:52.414619    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:03:52.414627    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:03:52.414644    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:03:52.414651    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:03:52.414658    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:03:52.414665    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:03:52.414673    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:03:52.414703    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:03:52.414716    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:03:52.414725    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:03:52.414733    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:03:52.414740    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:03:52.414748    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:03:52.414755    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:03:52.414763    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:03:52.414770    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:03:52.414778    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:03:52.414786    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:03:52.414793    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:03:54.416807    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 24
	I0815 17:03:54.416822    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:03:54.416868    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:03:54.417827    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for b6:db:e2:e2:82:57 in /var/db/dhcpd_leases ...
	I0815 17:03:54.417886    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 22 entries in /var/db/dhcpd_leases!
	I0815 17:03:54.417899    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:03:54.417908    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:03:54.417915    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:03:54.417922    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:03:54.417929    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:03:54.417935    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:03:54.417944    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:03:54.417960    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:03:54.417973    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:03:54.417981    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:03:54.417990    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:03:54.417997    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:03:54.418006    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:03:54.418024    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:03:54.418032    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:03:54.418041    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:03:54.418049    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:03:54.418057    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:03:54.418065    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:03:54.418077    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:03:54.418085    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:03:54.418097    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:03:56.420149    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 25
	I0815 17:03:56.420164    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:03:56.420213    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:03:56.421045    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for b6:db:e2:e2:82:57 in /var/db/dhcpd_leases ...
	I0815 17:03:56.421086    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 22 entries in /var/db/dhcpd_leases!
	I0815 17:03:56.421095    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:03:56.421113    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:03:56.421119    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:03:56.421127    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:03:56.421133    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:03:56.421141    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:03:56.421149    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:03:56.421164    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:03:56.421178    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:03:56.421188    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:03:56.421196    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:03:56.421204    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:03:56.421211    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:03:56.421219    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:03:56.421224    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:03:56.421245    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:03:56.421259    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:03:56.421286    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:03:56.421296    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:03:56.421307    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:03:56.421315    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:03:56.421324    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:03:58.421897    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 26
	I0815 17:03:58.421909    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:03:58.422032    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:03:58.422794    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for b6:db:e2:e2:82:57 in /var/db/dhcpd_leases ...
	I0815 17:03:58.422853    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 22 entries in /var/db/dhcpd_leases!
	I0815 17:03:58.422868    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:03:58.422879    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:03:58.422891    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:03:58.422900    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:03:58.422907    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:03:58.422916    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:03:58.422931    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:03:58.422944    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:03:58.422953    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:03:58.422960    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:03:58.422973    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:03:58.422984    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:03:58.422993    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:03:58.423001    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:03:58.423016    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:03:58.423030    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:03:58.423039    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:03:58.423044    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:03:58.423068    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:03:58.423077    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:03:58.423085    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:03:58.423094    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:04:00.425090    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 27
	I0815 17:04:00.425105    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:04:00.425158    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:04:00.425976    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for b6:db:e2:e2:82:57 in /var/db/dhcpd_leases ...
	I0815 17:04:00.426020    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 22 entries in /var/db/dhcpd_leases!
	I0815 17:04:00.426031    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:04:00.426050    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:04:00.426058    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:04:00.426065    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:04:00.426072    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:04:00.426080    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:04:00.426087    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:04:00.426101    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:04:00.426109    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:04:00.426116    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:04:00.426125    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:04:00.426142    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:04:00.426155    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:04:00.426164    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:04:00.426170    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:04:00.426185    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:04:00.426197    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:04:00.426207    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:04:00.426214    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:04:00.426222    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:04:00.426229    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:04:00.426238    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
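	
	Each "Attempt N" block above follows the same cycle: the driver re-reads the hyperkit pid from its JSON state file to confirm the VM process is still alive, then scans /var/db/dhcpd_leases for the new VM's MAC address (b6:db:e2:e2:82:57), logging every entry it finds. None of the 22 stale "minikube" leases matches, so it sleeps roughly two seconds (note the timestamps: 17:03:38, 17:03:40, 17:03:42, ...) and tries again. Below is a minimal, illustrative Go sketch of that lease scan, assuming the standard macOS bootpd lease-file format; the field handling and the findIPForMAC helper are this report's own illustration, not the driver's actual code.
	
	package main
	
	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)
	
	// findIPForMAC scans a bootpd lease file for a lease whose hw_address
	// matches mac and returns its ip_address, or "" when no lease matches yet.
	func findIPForMAC(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()
	
		var ip, hw string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// Lease entries carry a type prefix, e.g. "hw_address=1,ce:c8:9d:dc:27:af";
				// drop it before comparing.
				hw = strings.TrimPrefix(line, "hw_address=")
				if i := strings.Index(hw, ","); i >= 0 {
					hw = hw[i+1:]
				}
			case line == "}":
				// End of one lease block: compare, then reset for the next.
				// bootpd writes octets without zero padding (the entries above
				// show e.g. f2:8:cd:6:1:ac), so a robust matcher normalizes
				// both sides the same way before comparing.
				if strings.EqualFold(hw, mac) {
					return ip, nil
				}
				ip, hw = "", ""
			}
		}
		return "", sc.Err()
	}
	
	func main() {
		// The MAC below is the one this test run is polling for.
		ip, err := findIPForMAC("/var/db/dhcpd_leases", "b6:db:e2:e2:82:57")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if ip == "" {
			fmt.Println("no matching lease yet; the driver retries on a ~2s interval")
			return
		}
		fmt.Println("VM IP:", ip)
	}
	
	In this run the scan never succeeds: the lease file only ever contains old entries, so the attempts continue until the driver's retry budget is exhausted, which is what ultimately surfaces as the provisioning timeout for force-systemd-flag-182000.
	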
	I0815 17:04:02.427297    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 28
	I0815 17:04:02.427313    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:04:02.427387    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:04:02.428196    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for b6:db:e2:e2:82:57 in /var/db/dhcpd_leases ...
	I0815 17:04:02.428253    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 22 entries in /var/db/dhcpd_leases!
	I0815 17:04:02.428264    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:04:02.428278    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:04:02.428286    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:04:02.428293    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:04:02.428299    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:04:02.428307    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:04:02.428313    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:04:02.428321    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:04:02.428330    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:04:02.428346    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:04:02.428358    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:04:02.428368    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:04:02.428376    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:04:02.428384    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:04:02.428392    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:04:02.428407    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:04:02.428420    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:04:02.428429    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:04:02.428435    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:04:02.428450    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:04:02.428465    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:04:02.428475    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:04:04.430462    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 29
	I0815 17:04:04.430473    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:04:04.430572    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:04:04.431326    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for b6:db:e2:e2:82:57 in /var/db/dhcpd_leases ...
	I0815 17:04:04.431383    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 22 entries in /var/db/dhcpd_leases!
	I0815 17:04:04.431396    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:04:04.431406    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:04:04.431413    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:04:04.431420    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:04:04.431426    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:04:04.431437    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:04:04.431443    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:04:04.431450    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:04:04.431459    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:04:04.431475    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:04:04.431484    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:04:04.431493    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:04:04.431500    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:04:04.431507    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:04:04.431516    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:04:04.431523    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:04:04.431529    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:04:04.431542    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:04:04.431555    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:04:04.431573    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:04:04.431585    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:04:04.431596    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:04:06.433707    6445 client.go:171] duration metric: took 1m0.732556525s to LocalClient.Create
	I0815 17:04:08.434441    6445 start.go:128] duration metric: took 1m2.766147575s to createHost
	I0815 17:04:08.434455    6445 start.go:83] releasing machines lock for "force-systemd-flag-182000", held for 1m2.766314358s
	W0815 17:04:08.434472    6445 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for b6:db:e2:e2:82:57
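
The failure above is the driver polling /var/db/dhcpd_leases for the VM's generated MAC and never finding a matching entry. A minimal Go sketch of that search, assuming the usual bootpd lease layout (ip_address= and hw_address= fields; the function name findIPForMAC and the exact parsing are illustrative, not the driver's actual code):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // findIPForMAC scans a bootpd-style leases file for an hw_address
    // matching mac and returns the ip_address seen in the same entry.
    func findIPForMAC(leasesPath, mac string) (string, bool) {
        f, err := os.Open(leasesPath)
        if err != nil {
            return "", false
        }
        defer f.Close()

        var ip string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case strings.HasPrefix(line, "ip_address="):
                ip = strings.TrimPrefix(line, "ip_address=")
            case strings.HasPrefix(line, "hw_address="):
                // Values look like "1,b6:db:e2:e2:82:57"; drop the type prefix.
                hw := strings.TrimPrefix(line, "hw_address=")
                if i := strings.IndexByte(hw, ','); i >= 0 {
                    hw = hw[i+1:]
                }
                if strings.EqualFold(hw, mac) && ip != "" {
                    return ip, true
                }
            }
        }
        return "", false
    }

    func main() {
        if ip, ok := findIPForMAC("/var/db/dhcpd_leases", "b6:db:e2:e2:82:57"); ok {
            fmt.Println("found", ip)
        } else {
            fmt.Println("no lease yet") // the condition every attempt above hit
        }
    }
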
	I0815 17:04:08.434825    6445 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 17:04:08.434853    6445 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 17:04:08.443732    6445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54506
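
Each "Plugin server listening at address 127.0.0.1:NNNNN" line is the driver plugin binding an ephemeral loopback port for libmachine's RPC client. A hedged sketch of just that binding step (the real plugin handshake is more involved; this only shows why the port differs on every launch):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Port 0 lets the kernel pick a free port, so every launch
        // binds a different address, as in the log above.
        ln, err := net.Listen("tcp", "127.0.0.1:0")
        if err != nil {
            panic(err)
        }
        defer ln.Close()
        fmt.Println("Plugin server listening at address", ln.Addr())
    }
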
	I0815 17:04:08.444161    6445 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:04:08.444604    6445 main.go:141] libmachine: Using API Version  1
	I0815 17:04:08.444638    6445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:04:08.444932    6445 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:04:08.445314    6445 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 17:04:08.445355    6445 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 17:04:08.453979    6445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54508
	I0815 17:04:08.454456    6445 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:04:08.454836    6445 main.go:141] libmachine: Using API Version  1
	I0815 17:04:08.454871    6445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:04:08.455076    6445 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:04:08.455192    6445 main.go:141] libmachine: (force-systemd-flag-182000) Calling .GetState
	I0815 17:04:08.455279    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:04:08.455373    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:04:08.456302    6445 main.go:141] libmachine: (force-systemd-flag-182000) Calling .DriverName
	I0815 17:04:08.497655    6445 out.go:177] * Deleting "force-systemd-flag-182000" in hyperkit ...
	I0815 17:04:08.519807    6445 main.go:141] libmachine: (force-systemd-flag-182000) Calling .Remove
	I0815 17:04:08.519932    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:04:08.519948    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:04:08.520013    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:04:08.520923    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:04:08.520991    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | waiting for graceful shutdown
	I0815 17:04:09.522336    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:04:09.522494    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:04:09.523386    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | waiting for graceful shutdown
	I0815 17:04:10.525122    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:04:10.525219    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:04:10.526850    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | waiting for graceful shutdown
	I0815 17:04:11.529039    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:04:11.529112    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:04:11.529684    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | waiting for graceful shutdown
	I0815 17:04:12.530647    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:04:12.530697    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:04:12.531278    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | waiting for graceful shutdown
	I0815 17:04:13.532177    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:04:13.532255    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6456
	I0815 17:04:13.533368    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | sending sigkill
	I0815 17:04:13.533378    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	W0815 17:04:13.545186    6445 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for b6:db:e2:e2:82:57
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for b6:db:e2:e2:82:57
	I0815 17:04:13.545202    6445 start.go:729] Will try again in 5 seconds ...
	I0815 17:04:13.555556    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:04:13 WARN : hyperkit: failed to read stdout: EOF
	I0815 17:04:13.555574    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:04:13 WARN : hyperkit: failed to read stderr: EOF
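
The delete sequence above polls for a graceful shutdown once a second and escalates to SIGKILL when the hyperkit process does not exit (the two EOF warnings are its output pipes closing afterwards). A rough Go equivalent of that escalation pattern; stopWithFallback, the pid, and the timings are illustrative:

    package main

    import (
        "os"
        "syscall"
        "time"
    )

    // stopWithFallback asks pid to exit, polls until it is gone, and
    // sends SIGKILL if it is still alive after the grace period.
    func stopWithFallback(pid int, grace, poll time.Duration) error {
        p, err := os.FindProcess(pid) // always succeeds on Unix
        if err != nil {
            return err
        }
        _ = p.Signal(syscall.SIGTERM) // request graceful shutdown
        deadline := time.Now().Add(grace)
        for time.Now().Before(deadline) {
            // Signal 0 delivers nothing; it only reports liveness.
            if err := p.Signal(syscall.Signal(0)); err != nil {
                return nil // process has exited
            }
            time.Sleep(poll)
        }
        return p.Signal(syscall.SIGKILL) // the log's "sending sigkill"
    }

    func main() {
        _ = stopWithFallback(6456, 5*time.Second, time.Second) // pid from the log
    }
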
	I0815 17:04:18.546943    6445 start.go:360] acquireMachinesLock for force-systemd-flag-182000: {Name:mkac8034c8a60617271f2754a03dcaf74fc4fef7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:04:23.581337    6445 start.go:364] duration metric: took 5.034271361s to acquireMachinesLock for "force-systemd-flag-182000"
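
The acquireMachinesLock line (Delay:500ms Timeout:13m0s) describes a poll-until-acquired mutex serializing machine creation. minikube uses a named mutex internally; this file-based stand-in is only a sketch of the same delay/timeout shape:

    package main

    import (
        "errors"
        "os"
        "time"
    )

    // acquireLock retries an exclusive-create lock file every delay until
    // it succeeds or timeout elapses; deleting the file releases the lock.
    func acquireLock(path string, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                return f.Close() // lock held
            }
            if time.Now().After(deadline) {
                return errors.New("timed out acquiring machines lock")
            }
            time.Sleep(delay)
        }
    }

    func main() {
        // Path is illustrative; values mirror the Delay/Timeout in the log.
        _ = acquireLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute)
    }
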
	I0815 17:04:23.581374    6445 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-182000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-182000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
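
The provisioning config above is one large Go struct printed with %+v-style formatting. Reduced to the handful of fields this test actually exercises (field names mirror the log; the trimmed ClusterConfig type below is a stand-in, and the real struct carries many more fields):

    package main

    import "fmt"

    // Trimmed stand-in for the cluster config printed in the log above.
    type ClusterConfig struct {
        Name     string
        Memory   int // MB
        CPUs     int
        DiskSize int // MB
        Driver   string
    }

    func main() {
        cfg := ClusterConfig{
            Name:     "force-systemd-flag-182000",
            Memory:   2048,
            CPUs:     2,
            DiskSize: 20000,
            Driver:   "hyperkit",
        }
        fmt.Printf("%+v\n", cfg) // prints {Name:force-systemd-flag-182000 ...} like the log
    }
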
	I0815 17:04:23.581437    6445 start.go:125] createHost starting for "" (driver="hyperkit")
	I0815 17:04:23.603761    6445 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0815 17:04:23.603912    6445 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 17:04:23.603969    6445 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 17:04:23.614037    6445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54533
	I0815 17:04:23.614362    6445 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:04:23.614686    6445 main.go:141] libmachine: Using API Version  1
	I0815 17:04:23.614696    6445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:04:23.614932    6445 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:04:23.615039    6445 main.go:141] libmachine: (force-systemd-flag-182000) Calling .GetMachineName
	I0815 17:04:23.615122    6445 main.go:141] libmachine: (force-systemd-flag-182000) Calling .DriverName
	I0815 17:04:23.615223    6445 start.go:159] libmachine.API.Create for "force-systemd-flag-182000" (driver="hyperkit")
	I0815 17:04:23.615246    6445 client.go:168] LocalClient.Create starting
	I0815 17:04:23.615279    6445 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem
	I0815 17:04:23.615334    6445 main.go:141] libmachine: Decoding PEM data...
	I0815 17:04:23.615347    6445 main.go:141] libmachine: Parsing certificate...
	I0815 17:04:23.615400    6445 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem
	I0815 17:04:23.615438    6445 main.go:141] libmachine: Decoding PEM data...
	I0815 17:04:23.615457    6445 main.go:141] libmachine: Parsing certificate...
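
The three-step pattern logged twice above (reading certificate data, decoding PEM data, parsing certificate) maps directly onto the Go standard library. A minimal sketch; parseCert and the "ca.pem" path are illustrative:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    // parseCert mirrors the logged steps: read the file, decode the PEM
    // block, parse the x509 certificate.
    func parseCert(path string) (*x509.Certificate, error) {
        data, err := os.ReadFile(path) // "Reading certificate data from ..."
        if err != nil {
            return nil, err
        }
        block, _ := pem.Decode(data) // "Decoding PEM data..."
        if block == nil {
            return nil, fmt.Errorf("no PEM block in %s", path)
        }
        return x509.ParseCertificate(block.Bytes) // "Parsing certificate..."
    }

    func main() {
        cert, err := parseCert("ca.pem")
        if err != nil {
            panic(err)
        }
        fmt.Println(cert.Subject)
    }
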
	I0815 17:04:23.615470    6445 main.go:141] libmachine: Running pre-create checks...
	I0815 17:04:23.615476    6445 main.go:141] libmachine: (force-systemd-flag-182000) Calling .PreCreateCheck
	I0815 17:04:23.615551    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:04:23.615587    6445 main.go:141] libmachine: (force-systemd-flag-182000) Calling .GetConfigRaw
	I0815 17:04:23.626023    6445 main.go:141] libmachine: Creating machine...
	I0815 17:04:23.626069    6445 main.go:141] libmachine: (force-systemd-flag-182000) Calling .Create
	I0815 17:04:23.626369    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:04:23.626638    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | I0815 17:04:23.626351    6497 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19452-977/.minikube
	I0815 17:04:23.626782    6445 main.go:141] libmachine: (force-systemd-flag-182000) Downloading /Users/jenkins/minikube-integration/19452-977/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-977/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0815 17:04:23.816553    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | I0815 17:04:23.816488    6497 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/id_rsa...
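
"Creating ssh key: .../id_rsa" is a fresh RSA keypair written into the machine directory. A hedged standard-library sketch; writeSSHKey, the 2048-bit key size, and the output path are illustrative rather than the driver's exact choices:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "os"
    )

    // writeSSHKey generates an RSA key and stores the PEM-encoded
    // private half at path, mode 0600 as SSH expects.
    func writeSSHKey(path string) error {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return err
        }
        blk := &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}
        return os.WriteFile(path, pem.EncodeToMemory(blk), 0o600)
    }

    func main() {
        if err := writeSSHKey("id_rsa"); err != nil {
            panic(err)
        }
    }
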
	I0815 17:04:24.089012    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | I0815 17:04:24.088944    6497 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/force-systemd-flag-182000.rawdisk...
	I0815 17:04:24.089041    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Writing magic tar header
	I0815 17:04:24.089062    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Writing SSH key tar header
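
"Creating raw disk image" followed by the two tar-header writes builds the 20000MB .rawdisk as a sparse file with the SSH key embedded at its front. A sketch of the sparse-file part only (createRawDisk is illustrative; the tar embedding is omitted):

    package main

    import "os"

    // createRawDisk makes a sparse file of sizeMB megabytes; disk blocks
    // are only allocated as the guest actually writes them.
    func createRawDisk(path string, sizeMB int64) error {
        f, err := os.Create(path)
        if err != nil {
            return err
        }
        defer f.Close()
        return f.Truncate(sizeMB << 20)
    }

    func main() {
        if err := createRawDisk("force-systemd-flag-182000.rawdisk", 20000); err != nil {
            panic(err)
        }
    }
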
	I0815 17:04:24.110046    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | I0815 17:04:24.109998    6497 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000 ...
	I0815 17:04:24.501998    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:04:24.502020    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/hyperkit.pid
	I0815 17:04:24.502034    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Using UUID e617c036-9083-4e00-a370-088c6a3ff978
	I0815 17:04:24.533600    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Generated MAC f2:99:76:f9:88:b
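
Note the un-padded hex octets in "Generated MAC f2:99:76:f9:88:b", which is Go's %x applied per byte. The driver appears to derive the MAC from the VM UUID via the vmnet framework; the randomizer below is purely illustrative and only reproduces the formatting:

    package main

    import (
        "crypto/rand"
        "fmt"
    )

    // randomMAC returns a random unicast, locally administered MAC.
    func randomMAC() (string, error) {
        b := make([]byte, 6)
        if _, err := rand.Read(b); err != nil {
            return "", err
        }
        b[0] = (b[0] &^ 0x01) | 0x02 // clear multicast bit, set local bit
        // %x drops leading zeros per octet, hence addresses like "...:88:b".
        return fmt.Sprintf("%x:%x:%x:%x:%x:%x", b[0], b[1], b[2], b[3], b[4], b[5]), nil
    }

    func main() {
        mac, err := randomMAC()
        if err != nil {
            panic(err)
        }
        fmt.Println("Generated MAC", mac)
    }
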
	I0815 17:04:24.533618    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-182000
	I0815 17:04:24.533662    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:04:24 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e617c036-9083-4e00-a370-088c6a3ff978", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000aea50)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 17:04:24.533686    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:04:24 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e617c036-9083-4e00-a370-088c6a3ff978", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000aea50)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 17:04:24.533742    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:04:24 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "e617c036-9083-4e00-a370-088c6a3ff978", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/force-systemd-flag-182000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-182000"}
	I0815 17:04:24.533780    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:04:24 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U e617c036-9083-4e00-a370-088c6a3ff978 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/force-systemd-flag-182000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/console-ring -f kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-182000"
	I0815 17:04:24.533789    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:04:24 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0815 17:04:24.536730    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:04:24 DEBUG: hyperkit: Pid is 6499
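
"Pid is 6499" closes the launch sequence: the CmdLine above is fork/exec'd and its output streams are attached to the logger. A trimmed os/exec sketch of that launch; the flag list is abbreviated (the full set is in the Arguments entry above) and the pid-file path is a placeholder:

    package main

    import (
        "bufio"
        "log"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("/usr/local/bin/hyperkit",
            "-A", "-u",
            "-F", "hyperkit.pid",
            "-c", "2", "-m", "2048M",
        )
        stderr, err := cmd.StderrPipe()
        if err != nil {
            log.Fatal(err)
        }
        if err := cmd.Start(); err != nil {
            log.Fatal(err)
        }
        log.Printf("Pid is %d", cmd.Process.Pid)
        // Forward hyperkit's stderr lines to the logger, matching the
        // "Redirecting stdout/stderr to logger" line above.
        sc := bufio.NewScanner(stderr)
        for sc.Scan() {
            log.Println("hyperkit: stderr:", sc.Text())
        }
    }
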
	I0815 17:04:24.537166    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 0
	I0815 17:04:24.537182    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:04:24.537228    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6499
	I0815 17:04:24.538127    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for f2:99:76:f9:88:b in /var/db/dhcpd_leases ...
	I0815 17:04:24.538194    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:04:24.538210    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe903}
	I0815 17:04:24.538230    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:04:24.538242    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:04:24.538257    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:04:24.538271    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:04:24.538286    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:04:24.538297    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:04:24.538324    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:04:24.538341    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:04:24.538375    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:04:24.538397    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:04:24.538437    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:04:24.538488    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:04:24.538500    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:04:24.538508    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:04:24.538516    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:04:24.538535    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:04:24.538546    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:04:24.538554    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:04:24.538562    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:04:24.538577    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:04:24.538591    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:04:24.538601    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:04:24.544376    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:04:24 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0815 17:04:24.552804    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:04:24 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-flag-182000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0815 17:04:24.553655    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:04:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 17:04:24.553674    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:04:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 17:04:24.553684    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:04:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 17:04:24.553692    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:04:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 17:04:24.933365    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:04:24 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0815 17:04:24.933381    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:04:24 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0815 17:04:25.048060    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:04:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 17:04:25.048086    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:04:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 17:04:25.048097    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:04:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 17:04:25.048130    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:04:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 17:04:25.049003    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:04:25 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0815 17:04:25.049015    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:04:25 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0815 17:04:26.538454    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 1
	I0815 17:04:26.538469    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:04:26.538606    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6499
	I0815 17:04:26.539403    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for f2:99:76:f9:88:b in /var/db/dhcpd_leases ...
	I0815 17:04:26.539470    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:04:26.539479    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe903}
	I0815 17:04:26.539488    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:04:26.539497    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:04:26.539504    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:04:26.539531    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:04:26.539551    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:04:26.539560    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:04:26.539568    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:04:26.539574    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:04:26.539582    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:04:26.539591    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:04:26.539598    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:04:26.539604    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:04:26.539611    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:04:26.539619    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:04:26.539636    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:04:26.539645    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:04:26.539659    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:04:26.539674    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:04:26.539683    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:04:26.539697    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:04:26.539714    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:04:26.539744    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:04:28.540928    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 2
	I0815 17:04:28.540950    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:04:28.541052    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6499
	I0815 17:04:28.541908    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for f2:99:76:f9:88:b in /var/db/dhcpd_leases ...
	I0815 17:04:28.541963    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:04:28.541981    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe903}
	I0815 17:04:28.541995    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:04:28.542005    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:04:28.542013    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:04:28.542020    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:04:28.542028    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:04:28.542037    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:04:28.542045    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:04:28.542053    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:04:28.542068    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:04:28.542079    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:04:28.542087    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:04:28.542096    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:04:28.542131    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:04:28.542143    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:04:28.542151    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:04:28.542158    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:04:28.542165    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:04:28.542171    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:04:28.542197    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:04:28.542207    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:04:28.542216    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:04:28.542223    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:04:30.453402    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:04:30 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0815 17:04:30.453568    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:04:30 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0815 17:04:30.453580    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:04:30 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0815 17:04:30.473329    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | 2024/08/15 17:04:30 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0815 17:04:30.542446    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 3
	I0815 17:04:30.542469    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:04:30.542604    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6499
	I0815 17:04:30.544329    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for f2:99:76:f9:88:b in /var/db/dhcpd_leases ...
	I0815 17:04:30.544443    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:04:30.544457    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66be978c}
	I0815 17:04:30.544467    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:04:30.544475    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:04:30.544486    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:04:30.544497    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:04:30.544508    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:04:30.544517    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:04:30.544526    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:04:30.544534    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:04:30.544548    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:04:30.544565    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:04:30.544576    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:04:30.544586    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:04:30.544597    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:04:30.544608    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:04:30.544618    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:04:30.544629    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:04:30.544639    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:04:30.544653    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:04:30.544680    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:04:30.544699    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:04:30.544711    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:04:30.544722    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:04:32.544646    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 4
	I0815 17:04:32.544663    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:04:32.544730    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6499
	I0815 17:04:32.545541    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for f2:99:76:f9:88:b in /var/db/dhcpd_leases ...
	I0815 17:04:32.545609    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:04:32.545618    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66be978c}
	I0815 17:04:32.545629    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:04:32.545635    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:04:32.545648    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:04:32.545662    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:04:32.545669    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:04:32.545681    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:04:32.545690    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:04:32.545703    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:04:32.545713    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:04:32.545721    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:04:32.545729    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:04:32.545736    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:04:32.545743    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:04:32.545751    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:04:32.545758    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:04:32.545768    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:04:32.545787    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:04:32.545801    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:04:32.545809    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:04:32.545818    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:04:32.545825    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:04:32.545834    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
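	
	What each of these scans does: the driver re-reads macOS's /var/db/dhcpd_leases and looks for a lease whose hardware address matches the new VM's MAC. Below is a rough Go sketch of that step — illustrative only, not the driver's actual source. The brace-delimited key=value file layout (name=, ip_address=, hw_address=, identifier=, lease=) is the assumed bootpd format, and the helper names are made up; the struct simply mirrors the fields printed in the "dhcp entry" lines above.
	
	package main
	
	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)
	
	// dhcpEntry mirrors the fields shown in the log's "dhcp entry" lines.
	type dhcpEntry struct {
		Name, IPAddress, HWAddress, ID, Lease string
	}
	
	// parseLeases reads brace-delimited key=value blocks, the format macOS
	// bootpd is assumed to use for /var/db/dhcpd_leases.
	func parseLeases(path string) ([]dhcpEntry, error) {
		f, err := os.Open(path)
		if err != nil {
			return nil, err
		}
		defer f.Close()
	
		var entries []dhcpEntry
		var cur dhcpEntry
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case line == "{":
				cur = dhcpEntry{}
			case line == "}":
				entries = append(entries, cur)
			case strings.HasPrefix(line, "name="):
				cur.Name = strings.TrimPrefix(line, "name=")
			case strings.HasPrefix(line, "ip_address="):
				cur.IPAddress = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// stored as "1,9a:95:e6:3c:4e:19"; strip the leading type tag
				v := strings.TrimPrefix(line, "hw_address=")
				if i := strings.Index(v, ","); i >= 0 {
					v = v[i+1:]
				}
				cur.HWAddress = v
			case strings.HasPrefix(line, "identifier="):
				cur.ID = strings.TrimPrefix(line, "identifier=")
			case strings.HasPrefix(line, "lease="):
				cur.Lease = strings.TrimPrefix(line, "lease=")
			}
		}
		return entries, sc.Err()
	}
	
	// findIP returns the leased IP for a MAC, if present. Note the target MAC
	// in this log (f2:99:76:f9:88:b) is unpadded, so a plain string compare
	// only works when both sides use the same unpadded form.
	func findIP(entries []dhcpEntry, mac string) (string, bool) {
		for _, e := range entries {
			if e.HWAddress == mac {
				return e.IPAddress, true
			}
		}
		return "", false
	}
	
	func main() {
		entries, err := parseLeases("/var/db/dhcpd_leases")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if ip, ok := findIP(entries, "f2:99:76:f9:88:b"); ok {
			fmt.Println("lease found:", ip)
		} else {
			fmt.Printf("no lease yet among %d entries\n", len(entries))
		}
	}
	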
	[... attempts 5 through 14 elided: from 17:04:34 to 17:04:52 the driver repeated the identical scan every two seconds, each pass finding the same 23 entries in /var/db/dhcpd_leases and no lease for f2:99:76:f9:88:b ...]
	I0815 17:04:54.566849    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 15
	I0815 17:04:54.566865    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:04:54.566949    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6499
	I0815 17:04:54.567745    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for f2:99:76:f9:88:b in /var/db/dhcpd_leases ...
	I0815 17:04:54.567804    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:04:54.567819    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66be978c}
	I0815 17:04:54.567839    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:04:54.567850    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:04:54.567858    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:04:54.567866    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:04:54.567872    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:04:54.567883    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:04:54.567890    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:04:54.567910    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:04:54.567929    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:04:54.567942    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:04:54.567952    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:04:54.567960    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:04:54.567968    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:04:54.567977    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:04:54.567987    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:04:54.567996    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:04:54.568003    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:04:54.568015    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:04:54.568036    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:04:54.568049    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:04:54.568058    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:04:54.568069    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	[... attempts 16-24 omitted: each 2-second retry repeats the identical 23-entry /var/db/dhcpd_leases scan shown above, and f2:99:76:f9:88:b is still not found ...]
	I0815 17:05:14.591968    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 25
	I0815 17:05:14.591979    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:05:14.592035    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6499
	I0815 17:05:14.592835    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for f2:99:76:f9:88:b in /var/db/dhcpd_leases ...
	I0815 17:05:14.592858    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:05:14.592873    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66be978c}
	I0815 17:05:14.592881    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:05:14.592888    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:05:14.592905    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:05:14.592915    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:05:14.592923    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:05:14.592930    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:05:14.592938    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:05:14.592945    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:05:14.592952    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:05:14.592963    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:05:14.592972    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:05:14.593000    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:05:14.593017    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:05:14.593030    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:05:14.593039    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:05:14.593047    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:05:14.593063    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:05:14.593073    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:05:14.593082    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:05:14.593090    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:05:14.593098    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:05:14.593105    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:05:16.593431    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 26
	I0815 17:05:16.593443    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:05:16.593500    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6499
	I0815 17:05:16.594342    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for f2:99:76:f9:88:b in /var/db/dhcpd_leases ...
	I0815 17:05:16.594384    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:05:16.594402    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66be978c}
	I0815 17:05:16.594418    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:05:16.594439    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:05:16.594450    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:05:16.594465    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:05:16.594476    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:05:16.594486    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:05:16.594495    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:05:16.594503    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:05:16.594511    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:05:16.594526    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:05:16.594538    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:05:16.594557    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:05:16.594569    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:05:16.594584    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:05:16.594594    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:05:16.594614    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:05:16.594629    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:05:16.594637    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:05:16.594646    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:05:16.594654    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:05:16.594663    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:05:16.594678    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:05:18.594850    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 27
	I0815 17:05:18.594869    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:05:18.594904    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6499
	I0815 17:05:18.595789    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for f2:99:76:f9:88:b in /var/db/dhcpd_leases ...
	I0815 17:05:18.595829    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:05:18.595838    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66be978c}
	I0815 17:05:18.595847    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:05:18.595856    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:05:18.595864    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:05:18.595871    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:05:18.595877    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:05:18.595883    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:05:18.595892    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:05:18.595899    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:05:18.595908    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:05:18.595924    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:05:18.595934    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:05:18.595942    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:05:18.595950    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:05:18.595957    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:05:18.595963    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:05:18.595969    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:05:18.595977    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:05:18.595985    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:05:18.595993    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:05:18.595999    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:05:18.596006    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:05:18.596014    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:05:20.596444    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 28
	I0815 17:05:20.596458    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:05:20.596539    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6499
	I0815 17:05:20.597354    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for f2:99:76:f9:88:b in /var/db/dhcpd_leases ...
	I0815 17:05:20.597398    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:05:20.597408    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66be978c}
	I0815 17:05:20.597419    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:05:20.597429    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:05:20.597437    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:05:20.597445    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:05:20.597452    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:05:20.597470    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:05:20.597488    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:05:20.597502    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:05:20.597518    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:05:20.597527    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:05:20.597536    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:05:20.597549    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:05:20.597563    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:05:20.597574    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:05:20.597582    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:05:20.597591    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:05:20.597607    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:05:20.597619    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:05:20.597634    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:05:20.597648    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:05:20.597656    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:05:20.597662    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:05:22.599626    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Attempt 29
	I0815 17:05:22.599639    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 17:05:22.599681    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | hyperkit pid from json: 6499
	I0815 17:05:22.600677    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Searching for f2:99:76:f9:88:b in /var/db/dhcpd_leases ...
	I0815 17:05:22.600728    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:05:22.600742    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66be978c}
	I0815 17:05:22.600755    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:05:22.600763    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:05:22.600781    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:05:22.600788    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:05:22.600795    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:05:22.600803    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:05:22.600810    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:05:22.600818    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:05:22.600825    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:05:22.600833    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:05:22.600841    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:05:22.600850    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:05:22.600858    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:05:22.600865    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:05:22.600873    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:05:22.600881    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:05:22.600897    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:05:22.600910    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:05:22.600926    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:05:22.600936    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:05:22.600943    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:05:22.600952    6445 main.go:141] libmachine: (force-systemd-flag-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:05:24.602984    6445 client.go:171] duration metric: took 1m0.986537237s to LocalClient.Create
	I0815 17:05:26.603465    6445 start.go:128] duration metric: took 1m3.020787214s to createHost
	I0815 17:05:26.603495    6445 start.go:83] releasing machines lock for "force-systemd-flag-182000", held for 1m3.020915072s
	W0815 17:05:26.603560    6445 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p force-systemd-flag-182000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for f2:99:76:f9:88:b
	* Failed to start hyperkit VM. Running "minikube delete -p force-systemd-flag-182000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for f2:99:76:f9:88:b
	I0815 17:05:26.625231    6445 out.go:201] 
	W0815 17:05:26.687710    6445 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for f2:99:76:f9:88:b
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for f2:99:76:f9:88:b
	W0815 17:05:26.687721    6445 out.go:270] * 
	* 
	W0815 17:05:26.688371    6445 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 17:05:26.749621    6445 out.go:201] 

                                                
                                                
** /stderr **
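
The stderr capture above is dominated by the driver's IP-discovery loop: hyperkit assigned the new VM the MAC address f2:99:76:f9:88:b, and docker-machine-driver-hyperkit re-reads /var/db/dhcpd_leases roughly every two seconds, each time finding the same 23 stale minikube leases (192.169.0.2 through 192.169.0.24) and no match, until it gives up after about 30 attempts and surfaces "IP address never found in dhcp leases file". The following is a minimal, hypothetical sketch of that polling pattern, assuming a simplified key=value lease layout; the real parser in minikube's hyperkit driver is more thorough.

package main

import (
	"fmt"
	"os"
	"regexp"
	"time"
)

// findIPForMAC scans the macOS DHCP lease file for hwAddr and returns the
// matching ip_address, if any. The regexp assumes adjacent ip_address= and
// hw_address=1,<mac> lines per lease block (an assumption for this sketch).
func findIPForMAC(leaseFile, hwAddr string) (string, bool) {
	data, err := os.ReadFile(leaseFile)
	if err != nil {
		return "", false
	}
	re := regexp.MustCompile(`ip_address=(\S+)\s+hw_address=1,(\S+)`)
	for _, m := range re.FindAllStringSubmatch(string(data), -1) {
		if m[2] == hwAddr {
			return m[1], true
		}
	}
	return "", false
}

func main() {
	const mac = "f2:99:76:f9:88:b" // the MAC searched for in the log above
	for attempt := 0; attempt < 30; attempt++ {
		if ip, ok := findIPForMAC("/var/db/dhcpd_leases", mac); ok {
			fmt.Printf("found IP %s for %s\n", ip, mac)
			return
		}
		time.Sleep(2 * time.Second) // the log shows ~2s between attempts
	}
	fmt.Println("IP address never found in dhcp leases file")
}
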
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-182000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-182000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-182000 ssh "docker info --format {{.CgroupDriver}}": exit status 50 (167.746487ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node force-systemd-flag-182000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-182000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 50
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-08-15 17:05:27.071229 -0700 PDT m=+3633.997250964
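
Note that the start command already failed with exit status 80, yet the test still runs its main assertion: ssh into the node and ask Docker for its active cgroup driver, which should be "systemd" when --force-systemd is in effect. With no control-plane IP the ssh subcommand exits 50 (DRV_CP_ENDPOINT) instead, as the block above shows. A rough, hypothetical sketch of that check, reusing the binary path and profile name from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Query the guest Docker daemon's cgroup driver over minikube ssh.
	cmd := exec.Command("out/minikube-darwin-amd64",
		"-p", "force-systemd-flag-182000",
		"ssh", "docker info --format {{.CgroupDriver}}")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// Without a reachable control-plane endpoint this fails, mirroring
		// the exit status 50 captured above.
		fmt.Printf("ssh failed: %v\n%s", err, out)
		return
	}
	if got := strings.TrimSpace(string(out)); got != "systemd" {
		fmt.Printf("expected cgroup driver systemd, got %q\n", got)
	}
}
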
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-182000 -n force-systemd-flag-182000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-182000 -n force-systemd-flag-182000: exit status 7 (80.072416ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 17:05:27.149333    6551 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0815 17:05:27.149353    6551 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-182000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "force-systemd-flag-182000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-182000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-182000: (5.238233624s)
--- FAIL: TestForceSystemdFlag (147.17s)
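
The post-mortem above is formulaic: probe the host with minikube status --format={{.Host}}, tolerate exit status 7 (a stopped or errored host is expected after a failed start, hence "may be ok"), skip log retrieval when the host is not running, and delete the profile so later tests start from a clean slate. A condensed, hypothetical sketch of that cleanup flow, with names taken from the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const profile = "force-systemd-flag-182000"
	const bin = "out/minikube-darwin-amd64"

	// Probe host state; after a failed start it typically reports "Error".
	out, err := exec.Command(bin, "status", "--format={{.Host}}",
		"-p", profile, "-n", profile).Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		// Exit 7 from `minikube status` is tolerated here ("may be ok").
		fmt.Printf("%q host is not running, skipping log retrieval (state=%q)\n",
			profile, strings.TrimSpace(string(out)))
	}

	// Always delete the profile so subsequent tests start fresh.
	if err := exec.Command(bin, "delete", "-p", profile).Run(); err != nil {
		fmt.Println("cleanup failed:", err)
	}
}
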

                                                
                                    
TestForceSystemdEnv (200.69s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-331000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
E0815 17:05:52.554881    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/skaffold-329000/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-331000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (3m15.102977945s)
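
TestForceSystemdEnv exercises the same requirement through the environment rather than a flag: the stdout below reports MINIKUBE_FORCE_SYSTEMD=true, and the start command carries no --force-systemd. A short, hypothetical sketch of driving that variant from Go, with the binary path and profile name taken from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "start",
		"-p", "force-systemd-env-331000", "--memory=2048",
		"--alsologtostderr", "-v=5", "--driver=hyperkit")
	// The env variant sets MINIKUBE_FORCE_SYSTEMD instead of passing the
	// --force-systemd flag used by TestForceSystemdFlag.
	cmd.Env = append(os.Environ(), "MINIKUBE_FORCE_SYSTEMD=true")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("start failed:", err) // exit status 80 in the run above
	}
}
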

                                                
                                                
-- stdout --
	* [force-systemd-env-331000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-977/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the hyperkit driver based on user configuration
	* Downloading driver docker-machine-driver-hyperkit:
	* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
	
	    $ sudo chown root:wheel /Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit 
	    $ sudo chmod u+s /Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit 
	
	
	* Starting "force-systemd-env-331000" primary control-plane node in "force-systemd-env-331000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "force-systemd-env-331000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 17:05:45.893085    6590 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:05:45.893251    6590 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:05:45.893256    6590 out.go:358] Setting ErrFile to fd 2...
	I0815 17:05:45.893259    6590 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:05:45.893454    6590 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-977/.minikube/bin
	I0815 17:05:45.894903    6590 out.go:352] Setting JSON to false
	I0815 17:05:45.917619    6590 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3916,"bootTime":1723762829,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0815 17:05:45.917712    6590 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 17:05:45.939723    6590 out.go:177] * [force-systemd-env-331000] minikube v1.33.1 on Darwin 14.6.1
	I0815 17:05:45.983271    6590 notify.go:220] Checking for updates...
	I0815 17:05:46.005360    6590 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 17:05:46.047874    6590 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 17:05:46.090018    6590 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0815 17:05:46.110915    6590 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:05:46.132269    6590 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-977/.minikube
	I0815 17:05:46.180241    6590 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0815 17:05:46.201349    6590 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:05:46.231972    6590 out.go:177] * Using the hyperkit driver based on user configuration
	I0815 17:05:46.273823    6590 start.go:297] selected driver: hyperkit
	I0815 17:05:46.273851    6590 start.go:901] validating driver "hyperkit" against <nil>
	I0815 17:05:46.273874    6590 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:05:46.278429    6590 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:05:48.288067    6590 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19452-977/.minikube/bin:/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	W0815 17:05:48.301289    6590 install.go:62] docker-machine-driver-hyperkit: exit status 1
	I0815 17:05:48.322654    6590 out.go:177] * Downloading driver docker-machine-driver-hyperkit:
	I0815 17:05:48.364718    6590 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.33.1/docker-machine-driver-hyperkit-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.33.1/docker-machine-driver-hyperkit-amd64.sha256 -> /Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit
	I0815 17:05:48.761683    6590 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.33.1/docker-machine-driver-hyperkit-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.33.1/docker-machine-driver-hyperkit-amd64.sha256 Dst:/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x13858840 0x13858840 0x13858840 0x13858840 0x13858840 0x13858840 0x13858840] Decompressors:map[bz2:0xc0008883d8 gz:0xc000888460 tar:0xc000888410 tar.bz2:0xc000888420 tar.gz:0xc000888430 tar.xz:0xc000888440 tar.zst:0xc000888450 tbz2:0xc000888420 tgz:0xc000888430 txz:0xc000888440 tzst:0xc000888450 xz:0xc000888468 zip:0xc000888470 zst:0xc000888480] Getters:map[file:0xc00140abe0 http:0xc00084d2c0 https:0xc00084d540] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
	I0815 17:05:48.761757    6590 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.33.1/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.33.1/docker-machine-driver-hyperkit.sha256 -> /Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit
	I0815 17:05:50.867014    6590 install.go:79] stdout: 
	I0815 17:05:50.900800    6590 out.go:177] * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
	
	    $ sudo chown root:wheel /Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit 
	    $ sudo chmod u+s /Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit 
	
	
	I0815 17:05:50.924713    6590 install.go:99] testing: [sudo -n chown root:wheel /Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit]
	I0815 17:05:50.941830    6590 install.go:106] running: [sudo chown root:wheel /Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit]
	I0815 17:05:50.956819    6590 install.go:99] testing: [sudo -n chmod u+s /Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit]
	I0815 17:05:50.970984    6590 install.go:106] running: [sudo chmod u+s /Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit]
	I0815 17:05:50.985318    6590 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 17:05:50.985589    6590 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0815 17:05:50.985616    6590 cni.go:84] Creating CNI manager for ""
	I0815 17:05:50.985632    6590 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 17:05:50.985636    6590 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 17:05:50.985697    6590 start.go:340] cluster config:
	{Name:force-systemd-env-331000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-331000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:05:50.985795    6590 iso.go:125] acquiring lock: {Name:mkb93fc66b60be15dffa9eb2999a4be8d8d4d218 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:05:51.028404    6590 out.go:177] * Starting "force-systemd-env-331000" primary control-plane node in "force-systemd-env-331000" cluster
	I0815 17:05:51.049386    6590 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 17:05:51.049427    6590 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0815 17:05:51.049446    6590 cache.go:56] Caching tarball of preloaded images
	I0815 17:05:51.049575    6590 preload.go:172] Found /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0815 17:05:51.049596    6590 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 17:05:51.049882    6590 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/force-systemd-env-331000/config.json ...
	I0815 17:05:51.049933    6590 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/force-systemd-env-331000/config.json: {Name:mkd208d70e38bb440c2deb31b6a5ead0f4696553 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:51.050309    6590 start.go:360] acquireMachinesLock for force-systemd-env-331000: {Name:mkac8034c8a60617271f2754a03dcaf74fc4fef7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:05:51.050402    6590 start.go:364] duration metric: took 69.693µs to acquireMachinesLock for "force-systemd-env-331000"
	I0815 17:05:51.050431    6590 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-331000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-331000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 17:05:51.050470    6590 start.go:125] createHost starting for "" (driver="hyperkit")
	I0815 17:05:51.093345    6590 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0815 17:05:51.093515    6590 main.go:141] libmachine: Found binary path at /Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit
	I0815 17:05:51.093553    6590 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 17:05:52.178147    6590 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54650
	I0815 17:05:52.178550    6590 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:05:52.178968    6590 main.go:141] libmachine: Using API Version  1
	I0815 17:05:52.178979    6590 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:05:52.179178    6590 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:05:52.179360    6590 main.go:141] libmachine: (force-systemd-env-331000) Calling .GetMachineName
	I0815 17:05:52.179491    6590 main.go:141] libmachine: (force-systemd-env-331000) Calling .DriverName
	I0815 17:05:52.179611    6590 start.go:159] libmachine.API.Create for "force-systemd-env-331000" (driver="hyperkit")
	I0815 17:05:52.179640    6590 client.go:168] LocalClient.Create starting
	I0815 17:05:52.179670    6590 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem
	I0815 17:05:52.179721    6590 main.go:141] libmachine: Decoding PEM data...
	I0815 17:05:52.179741    6590 main.go:141] libmachine: Parsing certificate...
	I0815 17:05:52.179793    6590 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem
	I0815 17:05:52.179831    6590 main.go:141] libmachine: Decoding PEM data...
	I0815 17:05:52.179843    6590 main.go:141] libmachine: Parsing certificate...
	I0815 17:05:52.179857    6590 main.go:141] libmachine: Running pre-create checks...
	I0815 17:05:52.179867    6590 main.go:141] libmachine: (force-systemd-env-331000) Calling .PreCreateCheck
	I0815 17:05:52.179965    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:05:52.180106    6590 main.go:141] libmachine: (force-systemd-env-331000) Calling .GetConfigRaw
	I0815 17:05:52.180539    6590 main.go:141] libmachine: Creating machine...
	I0815 17:05:52.180548    6590 main.go:141] libmachine: (force-systemd-env-331000) Calling .Create
	I0815 17:05:52.180624    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:05:52.180730    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | I0815 17:05:52.180615    6619 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19452-977/.minikube
	I0815 17:05:52.180789    6590 main.go:141] libmachine: (force-systemd-env-331000) Downloading /Users/jenkins/minikube-integration/19452-977/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-977/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0815 17:05:52.368691    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | I0815 17:05:52.368628    6619 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/id_rsa...
	I0815 17:05:52.419219    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | I0815 17:05:52.419149    6619 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/force-systemd-env-331000.rawdisk...
	I0815 17:05:52.419231    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Writing magic tar header
	I0815 17:05:52.419240    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Writing SSH key tar header
	I0815 17:05:52.419652    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | I0815 17:05:52.419611    6619 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000 ...
	I0815 17:05:52.798163    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:05:52.798182    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/hyperkit.pid
	I0815 17:05:52.798208    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Using UUID 246609c0-8427-49ab-af6c-c148d7c031f9
	I0815 17:05:52.834712    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Generated MAC 8a:7b:e6:36:f1:40
	I0815 17:05:52.834729    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-331000
	I0815 17:05:52.834767    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:05:52 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"246609c0-8427-49ab-af6c-c148d7c031f9", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001147e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 17:05:52.834799    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:05:52 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"246609c0-8427-49ab-af6c-c148d7c031f9", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001147e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 17:05:52.834870    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:05:52 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "246609c0-8427-49ab-af6c-c148d7c031f9", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/force-systemd-env-331000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-331000"}
	I0815 17:05:52.834914    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:05:52 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 246609c0-8427-49ab-af6c-c148d7c031f9 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/force-systemd-env-331000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/console-ring -f kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-331000"
	I0815 17:05:52.834939    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:05:52 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0815 17:05:52.837866    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:05:52 DEBUG: hyperkit: Pid is 6620
	I0815 17:05:52.838284    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 0
	I0815 17:05:52.838310    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:05:52.838394    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:05:52.839341    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for 8a:7b:e6:36:f1:40 in /var/db/dhcpd_leases ...
	I0815 17:05:52.839400    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:05:52.839412    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:05:52.839425    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:05:52.839449    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:05:52.839463    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:05:52.839491    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:05:52.839513    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:05:52.839529    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:05:52.839564    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:05:52.839582    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:05:52.839596    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:05:52.839623    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:05:52.839641    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:05:52.839658    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:05:52.839681    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:05:52.839696    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:05:52.839708    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:05:52.839723    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:05:52.839739    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:05:52.839752    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:05:52.839842    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:05:52.839871    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:05:52.839884    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:05:52.839905    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:05:52.845445    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:05:52 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0815 17:05:52.853715    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:05:52 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0815 17:05:52.854471    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:05:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 17:05:52.854494    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:05:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 17:05:52.854531    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:05:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 17:05:52.854551    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:05:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 17:05:53.230816    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:05:53 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0815 17:05:53.230830    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:05:53 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0815 17:05:53.345572    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:05:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 17:05:53.345589    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:05:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 17:05:53.345624    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:05:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 17:05:53.345645    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:05:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 17:05:53.346470    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:05:53 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0815 17:05:53.346483    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:05:53 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0815 17:05:54.839979    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 1
	I0815 17:05:54.839998    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:05:54.840114    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:05:54.840918    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for 8a:7b:e6:36:f1:40 in /var/db/dhcpd_leases ...
	I0815 17:05:54.840987    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:05:54.841005    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:05:54.841016    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:05:54.841025    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:05:54.841034    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:05:54.841043    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:05:54.841051    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:05:54.841058    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:05:54.841067    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:05:54.841075    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:05:54.841084    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:05:54.841102    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:05:54.841112    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:05:54.841120    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:05:54.841133    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:05:54.841147    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:05:54.841158    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:05:54.841176    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:05:54.841185    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:05:54.841193    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:05:54.841201    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:05:54.841208    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:05:54.841217    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:05:54.841226    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:05:56.843197    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 2
	I0815 17:05:56.843215    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:05:56.843327    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:05:56.844099    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for 8a:7b:e6:36:f1:40 in /var/db/dhcpd_leases ...
	I0815 17:05:56.844190    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:05:56.844200    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:05:56.844226    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:05:56.844238    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:05:56.844257    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:05:56.844270    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:05:56.844284    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:05:56.844295    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:05:56.844313    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:05:56.844326    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:05:56.844334    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:05:56.844343    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:05:56.844353    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:05:56.844366    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:05:56.844383    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:05:56.844396    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:05:56.844404    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:05:56.844411    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:05:56.844422    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:05:56.844434    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:05:56.844444    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:05:56.844453    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:05:56.844462    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:05:56.844473    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:05:58.771160    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:05:58 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0815 17:05:58.771303    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:05:58 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0815 17:05:58.771313    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:05:58 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0815 17:05:58.791096    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:05:58 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0815 17:05:58.845006    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 3
	I0815 17:05:58.845030    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:05:58.845183    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:05:58.846617    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for 8a:7b:e6:36:f1:40 in /var/db/dhcpd_leases ...
	I0815 17:05:58.846726    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:05:58.846751    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:05:58.846780    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:05:58.846800    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:05:58.846819    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:05:58.846832    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:05:58.846850    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:05:58.846863    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:05:58.846876    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:05:58.846895    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:05:58.846910    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:05:58.846936    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:05:58.846949    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:05:58.846983    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:05:58.846999    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:05:58.847009    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:05:58.847019    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:05:58.847029    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:05:58.847055    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:05:58.847066    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:05:58.847076    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:05:58.847097    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:05:58.847107    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:05:58.847117    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:06:00.849004    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 4
	I0815 17:06:00.849033    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:06:00.849163    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:06:00.849978    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for 8a:7b:e6:36:f1:40 in /var/db/dhcpd_leases ...
	I0815 17:06:00.850036    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:06:00.850058    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:06:00.850066    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:06:00.850076    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:06:00.850086    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:06:00.850093    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:06:00.850100    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:06:00.850109    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:06:00.850124    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:06:00.850136    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:06:00.850144    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:06:00.850153    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:06:00.850160    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:06:00.850169    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:06:00.850182    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:06:00.850193    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:06:00.850209    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:06:00.850220    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:06:00.850237    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:06:00.850249    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:06:00.850257    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:06:00.850272    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:06:00.850285    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:06:00.850296    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:06:02.852229    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 5
	I0815 17:06:02.852244    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:06:02.852410    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:06:02.853274    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for 8a:7b:e6:36:f1:40 in /var/db/dhcpd_leases ...
	I0815 17:06:02.853333    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:06:02.853344    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:06:02.853357    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:06:02.853366    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:06:02.853373    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:06:02.853382    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:06:02.853391    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:06:02.853398    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:06:02.853404    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:06:02.853412    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:06:02.853420    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:06:02.853428    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:06:02.853445    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:06:02.853459    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:06:02.853469    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:06:02.853477    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:06:02.853485    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:06:02.853495    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:06:02.853511    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:06:02.853519    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:06:02.853526    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:06:02.853533    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:06:02.853547    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:06:02.853555    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:06:04.854808    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 6
	I0815 17:06:04.854821    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:06:04.854904    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:06:04.855659    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for 8a:7b:e6:36:f1:40 in /var/db/dhcpd_leases ...
	I0815 17:06:04.855721    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:06:04.855736    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:06:04.855748    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:06:04.855758    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:06:04.855770    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:06:04.855782    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:06:04.855789    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:06:04.855798    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:06:04.855819    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:06:04.855832    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:06:04.855840    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:06:04.855848    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:06:04.855864    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:06:04.855874    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:06:04.855882    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:06:04.855890    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:06:04.855899    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:06:04.855908    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:06:04.855931    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:06:04.855944    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:06:04.855951    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:06:04.855960    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:06:04.855967    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:06:04.855977    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:06:06.857933    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 7
	I0815 17:06:06.857948    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:06:06.858073    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:06:06.858833    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for 8a:7b:e6:36:f1:40 in /var/db/dhcpd_leases ...
	I0815 17:06:06.858892    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:06:06.858902    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:06:06.858913    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:06:06.858925    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:06:06.858940    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:06:06.858948    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:06:06.858954    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:06:06.858972    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:06:06.858978    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:06:06.858984    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:06:06.858992    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:06:06.859001    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:06:06.859012    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:06:06.859020    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:06:06.859028    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:06:06.859035    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:06:06.859043    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:06:06.859050    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:06:06.859061    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:06:06.859070    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:06:06.859086    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:06:06.859098    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:06:06.859115    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:06:06.859127    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:06:08.859349    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 8
	I0815 17:06:08.859362    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:06:08.859444    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:06:08.860211    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for 8a:7b:e6:36:f1:40 in /var/db/dhcpd_leases ...
	I0815 17:06:08.860264    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:06:08.860273    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:06:08.860282    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:06:08.860289    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:06:08.860297    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:06:08.860303    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:06:08.860310    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:06:08.860319    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:06:08.860332    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:06:08.860338    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:06:08.860344    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:06:08.860353    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:06:08.860375    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:06:08.860389    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:06:08.860397    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:06:08.860412    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:06:08.860420    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:06:08.860426    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:06:08.860438    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:06:08.860448    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:06:08.860464    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:06:08.860478    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:06:08.860486    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:06:08.860495    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:06:10.861456    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 9
	I0815 17:06:10.861473    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:06:10.861514    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:06:10.862357    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for 8a:7b:e6:36:f1:40 in /var/db/dhcpd_leases ...
	I0815 17:06:10.862408    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:06:10.862422    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:06:10.862434    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:06:10.862442    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:06:10.862460    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:06:10.862469    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:06:10.862482    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:06:10.862493    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:06:10.862501    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:06:10.862513    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:06:10.862526    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:06:10.862534    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:06:10.862543    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:06:10.862550    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:06:10.862557    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:06:10.862564    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:06:10.862572    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:06:10.862580    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:06:10.862587    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:06:10.862595    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:06:10.862603    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:06:10.862619    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:06:10.862629    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:06:10.862643    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:06:12.862750    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 10
	I0815 17:06:12.862764    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:06:12.862844    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:06:12.863625    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for 8a:7b:e6:36:f1:40 in /var/db/dhcpd_leases ...
	I0815 17:06:12.863683    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:06:12.863697    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:06:12.863706    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:06:12.863714    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:06:12.863721    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:06:12.863731    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:06:12.863751    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:06:12.863773    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:06:12.863782    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:06:12.863792    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:06:12.863809    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:06:12.863821    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:06:12.863831    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:06:12.863848    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:06:12.863861    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:06:12.863874    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:06:12.863882    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:06:12.863891    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:06:12.863898    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:06:12.863904    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:06:12.863911    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:06:12.863919    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:06:12.863940    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:06:12.863954    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:06:14.865862    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 11
	I0815 17:06:14.865875    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:06:14.865920    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:06:14.866814    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for 8a:7b:e6:36:f1:40 in /var/db/dhcpd_leases ...
	I0815 17:06:14.866856    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:06:14.866877    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:06:14.866886    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:06:14.866895    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:06:14.866903    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:06:14.866911    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:06:14.866917    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:06:14.866931    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:06:14.866944    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:06:14.866952    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:06:14.866958    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:06:14.866966    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:06:14.866981    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:06:14.866989    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:06:14.866996    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:06:14.867002    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:06:14.867011    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:06:14.867018    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:06:14.867026    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:06:14.867035    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:06:14.867044    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:06:14.867051    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:06:14.867059    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:06:14.867075    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:06:16.868496    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 12
	I0815 17:06:16.868509    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:06:16.868585    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:06:16.869557    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for 8a:7b:e6:36:f1:40 in /var/db/dhcpd_leases ...
	I0815 17:06:16.869620    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:06:16.869633    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:06:16.869644    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:06:16.869654    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:06:16.869660    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:06:16.869682    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:06:16.869692    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:06:16.869700    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:06:16.869712    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:06:16.869720    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:06:16.869728    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:06:16.869738    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:06:16.869747    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:06:16.869754    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:06:16.869762    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:06:16.869770    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:06:16.869777    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:06:16.869785    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:06:16.869792    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:06:16.869801    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:06:16.869810    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:06:16.869825    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:06:16.869846    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:06:16.869861    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:06:18.869910    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 13
	I0815 17:06:18.869929    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:06:18.869967    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:06:18.870745    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for 8a:7b:e6:36:f1:40 in /var/db/dhcpd_leases ...
	I0815 17:06:18.870794    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:06:18.870807    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:06:18.870830    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:06:18.870842    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:06:18.870850    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:06:18.870858    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:06:18.870865    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:06:18.870871    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:06:18.870878    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:06:18.870887    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:06:18.870893    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:06:18.870902    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:06:18.870911    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:06:18.870919    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:06:18.870927    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:06:18.870934    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:06:18.870942    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:06:18.870949    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:06:18.870956    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:06:18.870964    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:06:18.870980    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:06:18.870991    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:06:18.871012    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:06:18.871026    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:06:20.872026    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 14
	I0815 17:06:20.872041    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:06:20.872113    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:06:20.872883    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for 8a:7b:e6:36:f1:40 in /var/db/dhcpd_leases ...
	I0815 17:06:20.872928    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:06:20.872939    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:06:20.872947    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:06:20.872954    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:06:20.872961    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:06:20.872966    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:06:20.872981    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:06:20.872991    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:06:20.873007    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:06:20.873025    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:06:20.873042    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:06:20.873054    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:06:20.873064    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:06:20.873073    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:06:20.873080    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:06:20.873088    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:06:20.873096    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:06:20.873101    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:06:20.873108    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:06:20.873116    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:06:20.873123    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:06:20.873132    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:06:20.873141    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:06:20.873149    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:06:22.874682    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 15
	I0815 17:06:22.874695    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:06:22.874722    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:06:22.875533    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for 8a:7b:e6:36:f1:40 in /var/db/dhcpd_leases ...
	I0815 17:06:22.875586    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:06:22.875600    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:06:22.875617    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:06:22.875626    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:06:22.875633    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:06:22.875656    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:06:22.875666    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:06:22.875674    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:06:22.875695    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:06:22.875713    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:06:22.875726    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:06:22.875742    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:06:22.875755    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:06:22.875763    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:06:22.875771    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:06:22.875784    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:06:22.875794    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:06:22.875803    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:06:22.875815    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:06:22.875825    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:06:22.875833    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:06:22.875841    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:06:22.875850    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:06:22.875858    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:06:24.877809    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 16
	I0815 17:06:24.877822    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:06:24.877879    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:06:24.878787    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for 8a:7b:e6:36:f1:40 in /var/db/dhcpd_leases ...
	I0815 17:06:24.878887    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:06:24.878898    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:06:24.878908    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:06:24.878926    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:06:24.878934    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:06:24.878941    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:06:24.878962    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:06:24.878973    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:06:24.878980    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:06:24.879015    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:06:24.879025    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:06:24.879032    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:06:24.879041    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:06:24.879047    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:06:24.879055    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:06:24.879062    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:06:24.879068    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:06:24.879081    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:06:24.879094    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:06:24.879103    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:06:24.879110    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:06:24.879129    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:06:24.879138    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:06:24.879154    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:06:26.881138    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 17
	I0815 17:06:26.881154    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:06:26.881201    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:06:26.882188    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for 8a:7b:e6:36:f1:40 in /var/db/dhcpd_leases ...
	I0815 17:06:26.882245    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:06:26.882256    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:06:26.882265    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:06:26.882271    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:06:26.882283    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:06:26.882295    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:06:26.882302    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:06:26.882310    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:06:26.882317    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:06:26.882327    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:06:26.882335    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:06:26.882343    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:06:26.882359    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:06:26.882370    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:06:26.882379    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:06:26.882397    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:06:26.882404    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:06:26.882411    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:06:26.882424    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:06:26.882440    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:06:26.882452    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:06:26.882459    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:06:26.882466    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:06:26.882471    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:06:28.884476    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 18
	I0815 17:06:28.884492    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:06:28.884524    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:06:28.885384    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for 8a:7b:e6:36:f1:40 in /var/db/dhcpd_leases ...
	I0815 17:06:28.885437    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:06:28.885448    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:06:28.885457    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:06:28.885464    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:06:28.885476    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:06:28.885482    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:06:28.885489    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:06:28.885495    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:06:28.885504    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:06:28.885513    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:06:28.885520    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:06:28.885528    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:06:28.885536    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:06:28.885543    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:06:28.885550    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:06:28.885563    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:06:28.885581    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:06:28.885591    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:06:28.885609    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:06:28.885622    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:06:28.885637    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:06:28.885649    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:06:28.885658    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:06:28.885675    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:06:30.886911    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 19
	I0815 17:06:30.886928    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:06:30.886996    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:06:30.887781    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for 8a:7b:e6:36:f1:40 in /var/db/dhcpd_leases ...
	I0815 17:06:30.887838    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:06:30.887849    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:06:30.887862    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:06:30.887869    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:06:30.887876    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:06:30.887898    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:06:30.887906    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:06:30.887921    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:06:30.887937    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:06:30.887946    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:06:30.887963    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:06:30.887975    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:06:30.887986    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:06:30.887993    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:06:30.888000    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:06:30.888008    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:06:30.888019    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:06:30.888031    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:06:30.888039    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:06:30.888048    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:06:30.888054    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:06:30.888062    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:06:30.888070    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:06:30.888078    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:06:32.890048    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 20
	I0815 17:06:32.890062    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:06:32.890142    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:06:32.890899    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for 8a:7b:e6:36:f1:40 in /var/db/dhcpd_leases ...
	I0815 17:06:32.890963    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:06:32.890973    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:06:32.890981    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:06:32.890990    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:06:32.890999    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:06:32.891006    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:06:32.891029    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:06:32.891043    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:06:32.891057    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:06:32.891066    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:06:32.891074    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:06:32.891082    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:06:32.891094    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:06:32.891103    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:06:32.891128    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:06:32.891158    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:06:32.891165    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:06:32.891171    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:06:32.891182    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:06:32.891190    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:06:32.891204    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:06:32.891213    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:06:32.891220    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:06:32.891228    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:06:34.893176    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 21
	I0815 17:06:34.893190    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:06:34.893265    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:06:34.894126    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for 8a:7b:e6:36:f1:40 in /var/db/dhcpd_leases ...
	I0815 17:06:34.894195    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:06:34.894203    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:06:34.894211    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:06:34.894225    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:06:34.894233    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:06:34.894240    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:06:34.894257    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:06:34.894268    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:06:34.894276    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:06:34.894285    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:06:34.894295    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:06:34.894303    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:06:34.894315    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:06:34.894325    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:06:34.894333    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:06:34.894341    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:06:34.894356    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:06:34.894368    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:06:34.894378    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:06:34.894385    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:06:34.894393    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:06:34.894401    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:06:34.894407    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:06:34.894415    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:06:36.894428    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 22
	I0815 17:06:36.894441    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:06:36.894520    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:06:36.895368    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for 8a:7b:e6:36:f1:40 in /var/db/dhcpd_leases ...
	I0815 17:06:36.895392    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:06:36.895403    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:06:36.895413    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:06:36.895420    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:06:36.895427    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:06:36.895434    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:06:36.895441    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:06:36.895448    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:06:36.895456    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:06:36.895473    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:06:36.895481    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:06:36.895489    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:06:36.895495    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:06:36.895502    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:06:36.895511    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:06:36.895517    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:06:36.895524    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:06:36.895531    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:06:36.895538    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:06:36.895545    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:06:36.895553    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:06:36.895560    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:06:36.895566    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:06:36.895583    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:06:38.895650    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 23
	I0815 17:06:38.895666    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:06:38.895713    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:06:38.896497    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for 8a:7b:e6:36:f1:40 in /var/db/dhcpd_leases ...
	I0815 17:06:38.896550    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:06:38.896560    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:06:38.896573    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:06:38.896583    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:06:38.896602    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:06:38.896611    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:06:38.896630    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:06:38.896640    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:06:38.896651    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:06:38.896661    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:06:38.896669    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:06:38.896677    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:06:38.896684    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:06:38.896691    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:06:38.896700    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:06:38.896709    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:06:38.896716    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:06:38.896724    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:06:38.896749    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:06:38.896777    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:06:38.896813    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:06:38.896823    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:06:38.896836    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:06:38.896844    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:06:40.897569    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 24
	I0815 17:06:40.897581    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:06:40.897656    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:06:40.898478    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for 8a:7b:e6:36:f1:40 in /var/db/dhcpd_leases ...
	I0815 17:06:40.898487    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:06:40.898497    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:06:40.898508    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:06:40.898545    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:06:40.898559    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:06:40.898575    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:06:40.898588    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:06:40.898597    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:06:40.898606    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:06:40.898615    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:06:40.898623    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:06:40.898630    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:06:40.898637    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:06:40.898653    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:06:40.898664    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:06:40.898672    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:06:40.898681    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:06:40.898689    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:06:40.898696    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:06:40.898705    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:06:40.898712    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:06:40.898727    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:06:40.898735    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:06:40.898743    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:06:42.900711    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 25
	I0815 17:06:42.900727    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:06:42.900784    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:06:42.901566    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for 8a:7b:e6:36:f1:40 in /var/db/dhcpd_leases ...
	I0815 17:06:42.901609    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:06:42.901630    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:06:42.901651    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:06:42.901665    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:06:42.901673    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:06:42.901679    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:06:42.901687    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:06:42.901694    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:06:42.901700    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:06:42.901706    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:06:42.901728    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:06:42.901740    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:06:42.901760    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:06:42.901771    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:06:42.901780    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:06:42.901786    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:06:42.901794    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:06:42.901802    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:06:42.901813    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:06:42.901821    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:06:42.901830    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:06:42.901838    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:06:42.901846    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:06:42.901854    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:06:44.903853    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 26
	I0815 17:06:44.903870    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:06:44.903926    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:06:44.904763    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for 8a:7b:e6:36:f1:40 in /var/db/dhcpd_leases ...
	I0815 17:06:44.904799    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:06:44.904810    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:06:44.904819    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:06:44.904830    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:06:44.904839    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:06:44.904846    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:06:44.904852    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:06:44.904861    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:06:44.904874    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:06:44.904888    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:06:44.904898    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:06:44.904910    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:06:44.904921    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:06:44.904929    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:06:44.904936    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:06:44.904946    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:06:44.904953    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:06:44.904961    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:06:44.904968    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:06:44.904976    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:06:44.904984    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:06:44.904989    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:06:44.904995    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:06:44.905005    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:06:46.907070    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 27
	I0815 17:06:46.907087    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:06:46.907138    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:06:46.907914    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for 8a:7b:e6:36:f1:40 in /var/db/dhcpd_leases ...
	I0815 17:06:46.907965    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:06:46.907972    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:06:46.907980    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:06:46.907985    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:06:46.907992    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:06:46.907997    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:06:46.908004    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:06:46.908009    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:06:46.908035    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:06:46.908053    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:06:46.908076    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:06:46.908088    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:06:46.908096    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:06:46.908102    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:06:46.908109    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:06:46.908118    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:06:46.908127    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:06:46.908135    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:06:46.908143    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:06:46.908151    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:06:46.908158    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:06:46.908172    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:06:46.908179    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:06:46.908187    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:06:48.908874    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 28
	I0815 17:06:48.908890    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:06:48.908958    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:06:48.909745    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for 8a:7b:e6:36:f1:40 in /var/db/dhcpd_leases ...
	I0815 17:06:48.909808    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:06:48.909819    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:06:48.909832    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:06:48.909843    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:06:48.909851    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:06:48.909867    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:06:48.909876    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:06:48.909887    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:06:48.909894    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:06:48.909903    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:06:48.909911    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:06:48.909920    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:06:48.909926    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:06:48.909933    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:06:48.909941    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:06:48.909949    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:06:48.909957    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:06:48.909964    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:06:48.909969    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:06:48.909977    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:06:48.909983    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:06:48.909992    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:06:48.909999    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:06:48.910007    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:06:50.910849    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 29
	I0815 17:06:50.910863    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:06:50.910948    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:06:50.911716    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for 8a:7b:e6:36:f1:40 in /var/db/dhcpd_leases ...
	I0815 17:06:50.911812    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:06:50.911820    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:06:50.911829    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:06:50.911835    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:06:50.911842    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:06:50.911848    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:06:50.911863    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:06:50.911876    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:06:50.911884    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:06:50.911892    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:06:50.911900    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:06:50.911907    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:06:50.911916    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:06:50.911924    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:06:50.911931    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:06:50.911937    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:06:50.911944    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:06:50.911951    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:06:50.911958    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:06:50.911971    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:06:50.911984    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:06:50.911993    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:06:50.912007    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:06:50.912019    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
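	
	The repeated attempt blocks above all replay the same scan: every ~2 seconds the hyperkit driver rereads macOS's /var/db/dhcpd_leases looking for the new VM's MAC address (8a:7b:e6:36:f1:40), finds only the 23 pre-existing minikube leases, and retries. A minimal, self-contained Go sketch of that polling loop follows; it is not minikube's actual implementation, and the per-entry field layout (name, ip_address, hw_address) is assumed from the entries echoed in the log.
	
	// leasescan.go - hypothetical sketch of polling /var/db/dhcpd_leases for a MAC.
	package main
	
	import (
		"bufio"
		"fmt"
		"os"
		"strings"
		"time"
	)
	
	// findIPForMAC scans the leases file once and returns the ip_address of the
	// entry whose hw_address contains mac, or "" if no entry matches. Note the
	// file stores octets without zero-padding (e.g. 52:9e:6c:4e:69:e), so the
	// queried MAC must use the same formatting.
	func findIPForMAC(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()
	
		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac):
				return ip, nil // ip_address precedes hw_address within an entry
			case line == "}":
				ip = "" // entry closed without a match; reset
			}
		}
		return "", sc.Err()
	}
	
	func main() {
		const mac = "8a:7b:e6:36:f1:40" // the MAC the log above is waiting on
		for attempt := 1; attempt <= 30; attempt++ {
			ip, err := findIPForMAC("/var/db/dhcpd_leases", mac)
			if err != nil {
				fmt.Fprintln(os.Stderr, "read leases:", err)
				os.Exit(1)
			}
			if ip != "" {
				fmt.Println("found", ip)
				return
			}
			fmt.Printf("attempt %d: %s not in leases yet\n", attempt, mac)
			time.Sleep(2 * time.Second) // the log shows ~2s between attempts
		}
		fmt.Println("IP address never found in dhcp leases file")
	}
	
	When the loop exhausts its attempts without a match, the driver surfaces the "IP address never found in dhcp leases file" error seen a few lines further down.
	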
	I0815 17:06:52.912497    6590 client.go:171] duration metric: took 1m0.731660625s to LocalClient.Create
	I0815 17:06:54.914688    6590 start.go:128] duration metric: took 1m3.862932934s to createHost
	I0815 17:06:54.914701    6590 start.go:83] releasing machines lock for "force-systemd-env-331000", held for 1m3.863044688s
	W0815 17:06:54.914715    6590 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8a:7b:e6:36:f1:40
	I0815 17:06:54.915076    6590 main.go:141] libmachine: Found binary path at /Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit
	I0815 17:06:54.915103    6590 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 17:06:54.923734    6590 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54661
	I0815 17:06:54.924074    6590 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:06:54.924402    6590 main.go:141] libmachine: Using API Version  1
	I0815 17:06:54.924413    6590 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:06:54.924646    6590 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:06:54.925006    6590 main.go:141] libmachine: Found binary path at /Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit
	I0815 17:06:54.925037    6590 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 17:06:54.933444    6590 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54663
	I0815 17:06:54.933795    6590 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:06:54.934133    6590 main.go:141] libmachine: Using API Version  1
	I0815 17:06:54.934145    6590 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:06:54.934353    6590 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:06:54.934484    6590 main.go:141] libmachine: (force-systemd-env-331000) Calling .GetState
	I0815 17:06:54.934597    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:06:54.934650    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:06:54.935598    6590 main.go:141] libmachine: (force-systemd-env-331000) Calling .DriverName
	I0815 17:06:54.978215    6590 out.go:177] * Deleting "force-systemd-env-331000" in hyperkit ...
	I0815 17:06:55.020238    6590 main.go:141] libmachine: (force-systemd-env-331000) Calling .Remove
	I0815 17:06:55.020376    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:06:55.020386    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:06:55.020455    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:06:55.021360    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:06:55.021429    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | waiting for graceful shutdown
	I0815 17:06:56.021978    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:06:56.022061    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:06:56.022962    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | waiting for graceful shutdown
	I0815 17:06:57.024781    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:06:57.024855    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:06:57.026631    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | waiting for graceful shutdown
	I0815 17:06:58.027506    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:06:58.027581    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:06:58.028269    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | waiting for graceful shutdown
	I0815 17:06:59.029503    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:06:59.029597    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:06:59.030151    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | waiting for graceful shutdown
	I0815 17:07:00.032252    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:07:00.032331    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6620
	I0815 17:07:00.033424    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | sending sigkill
	I0815 17:07:00.033435    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
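The delete path above is the driver's two-stage stop: it polls the hyperkit pid once per second while "waiting for graceful shutdown", and after several attempts gives up and sends SIGKILL. A minimal Go sketch of that pattern, assuming the graceful phase is a SIGTERM and copying the one-second poll and six-attempt budget straight from the timestamps above (the names and the signal choice are illustrative, not minikube's actual code):

package main

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

// stopVM asks the hypervisor process to exit, polls for it to
// disappear, then falls back to SIGKILL -- the "waiting for graceful
// shutdown" / "sending sigkill" sequence in the log above.
func stopVM(proc *os.Process) error {
	if err := proc.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	for i := 0; i < 6; i++ {
		time.Sleep(time.Second)
		// Signal 0 delivers nothing but reports whether the pid is still alive.
		if err := proc.Signal(syscall.Signal(0)); err != nil {
			return nil // process exited gracefully
		}
	}
	return proc.Kill() // "sending sigkill"
}

func main() {
	// 6620 is the hyperkit pid from the log; purely illustrative here.
	proc, _ := os.FindProcess(6620)
	fmt.Println(stopVM(proc))
}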
	W0815 17:07:00.045789    6590 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8a:7b:e6:36:f1:40
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8a:7b:e6:36:f1:40
	I0815 17:07:00.045807    6590 start.go:729] Will try again in 5 seconds ...
	I0815 17:07:00.056605    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:07:00 WARN : hyperkit: failed to read stderr: EOF
	I0815 17:07:00.056624    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:07:00 WARN : hyperkit: failed to read stdout: EOF
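Once the half-created host is torn down, start.go falls back to its outer retry: surface the failure as a warning, wait five seconds ("Will try again in 5 seconds ..."), and run the whole create path once more. A compact sketch of that control flow, with createHost and deleteHost as hypothetical stand-ins for the real StartHost/Remove plumbing:

package main

import (
	"errors"
	"fmt"
	"time"
)

// startWithRetry mirrors the two-attempt flow in the log: try, and on
// failure tear down the partial machine, sleep 5s, and try once more.
func startWithRetry(createHost func() error, deleteHost func()) error {
	if err := createHost(); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		deleteHost()
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		return createHost()
	}
	return nil
}

func main() {
	attempt := 0
	err := startWithRetry(
		func() error {
			attempt++
			if attempt == 1 {
				return errors.New("IP address never found in dhcp leases file")
			}
			return nil
		},
		func() { fmt.Println("* Deleting machine ...") },
	)
	fmt.Println("second attempt error:", err)
}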
	I0815 17:07:05.046323    6590 start.go:360] acquireMachinesLock for force-systemd-env-331000: {Name:mkac8034c8a60617271f2754a03dcaf74fc4fef7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:07:57.830846    6590 start.go:364] duration metric: took 52.74800314s to acquireMachinesLock for "force-systemd-env-331000"
	I0815 17:07:57.830878    6590 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-331000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-331000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 17:07:57.830947    6590 start.go:125] createHost starting for "" (driver="hyperkit")
	I0815 17:07:57.852323    6590 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0815 17:07:57.852408    6590 main.go:141] libmachine: Found binary path at /Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit
	I0815 17:07:57.852433    6590 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 17:07:57.860969    6590 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54671
	I0815 17:07:57.861319    6590 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:57.861671    6590 main.go:141] libmachine: Using API Version  1
	I0815 17:07:57.861681    6590 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:57.861890    6590 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:57.862011    6590 main.go:141] libmachine: (force-systemd-env-331000) Calling .GetMachineName
	I0815 17:07:57.862099    6590 main.go:141] libmachine: (force-systemd-env-331000) Calling .DriverName
	I0815 17:07:57.862203    6590 start.go:159] libmachine.API.Create for "force-systemd-env-331000" (driver="hyperkit")
	I0815 17:07:57.862228    6590 client.go:168] LocalClient.Create starting
	I0815 17:07:57.862254    6590 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem
	I0815 17:07:57.862303    6590 main.go:141] libmachine: Decoding PEM data...
	I0815 17:07:57.862317    6590 main.go:141] libmachine: Parsing certificate...
	I0815 17:07:57.862367    6590 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem
	I0815 17:07:57.862406    6590 main.go:141] libmachine: Decoding PEM data...
	I0815 17:07:57.862424    6590 main.go:141] libmachine: Parsing certificate...
	I0815 17:07:57.862436    6590 main.go:141] libmachine: Running pre-create checks...
	I0815 17:07:57.862441    6590 main.go:141] libmachine: (force-systemd-env-331000) Calling .PreCreateCheck
	I0815 17:07:57.862519    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:07:57.862544    6590 main.go:141] libmachine: (force-systemd-env-331000) Calling .GetConfigRaw
	I0815 17:07:57.894054    6590 main.go:141] libmachine: Creating machine...
	I0815 17:07:57.894063    6590 main.go:141] libmachine: (force-systemd-env-331000) Calling .Create
	I0815 17:07:57.894159    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:07:57.894286    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | I0815 17:07:57.894145    6673 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19452-977/.minikube
	I0815 17:07:57.894343    6590 main.go:141] libmachine: (force-systemd-env-331000) Downloading /Users/jenkins/minikube-integration/19452-977/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19452-977/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0815 17:07:58.099479    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | I0815 17:07:58.099418    6673 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/id_rsa...
	I0815 17:07:58.210033    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | I0815 17:07:58.209940    6673 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/force-systemd-env-331000.rawdisk...
	I0815 17:07:58.210055    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Writing magic tar header
	I0815 17:07:58.210082    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Writing SSH key tar header
	I0815 17:07:58.210634    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | I0815 17:07:58.210590    6673 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000 ...
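The common.go steps above stage the machine directory: reuse the cached ISO, create an SSH key at machines/<name>/id_rsa, write the raw disk image (with the "magic tar header" that carries the key into the guest), and fix permissions. As one illustrative piece, here is a sketch of the key-generation step, assuming a 2048-bit RSA key serialized as PKCS#1 PEM; the matching .pub file the real code would also need is omitted:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"
)

// writeRSAKey generates a 2048-bit RSA keypair and writes the private
// half as a PEM file with 0600 permissions, roughly the
// "Creating ssh key" step in the log. Sketch only.
func writeRSAKey(path string) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	f, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
	if err != nil {
		return err
	}
	defer f.Close()
	return pem.Encode(f, &pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
}

func main() {
	if err := writeRSAKey("id_rsa"); err != nil {
		panic(err)
	}
}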
	I0815 17:07:58.586143    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:07:58.586161    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/hyperkit.pid
	I0815 17:07:58.586194    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Using UUID ce6d7885-3832-462d-a61a-3d8ae6546c5c
	I0815 17:07:58.611266    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Generated MAC a:50:e4:41:dd:ff
	I0815 17:07:58.611285    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-331000
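Before launch the driver fixes a UUID and a MAC for the guest; note that MACs in this report print without zero-padding (a:50:e4:41:dd:ff, 66:b7:e0:28:69:8). With hyperkit the guest MAC is normally handed out by vmnet based on the VM's UUID, so the generator below is only a generic illustration of producing a locally administered unicast MAC in that same print style, not the driver's actual derivation:

package main

import (
	"crypto/rand"
	"fmt"
)

// randomMAC returns a locally administered, unicast MAC address,
// formatted with %x so octets print unpadded like the log's entries.
func randomMAC() (string, error) {
	b := make([]byte, 6)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	b[0] = (b[0] | 0x02) &^ 0x01 // set "locally administered", clear "multicast"
	return fmt.Sprintf("%x:%x:%x:%x:%x:%x", b[0], b[1], b[2], b[3], b[4], b[5]), nil
}

func main() {
	mac, err := randomMAC()
	if err != nil {
		panic(err)
	}
	fmt.Println(mac)
}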
	I0815 17:07:58.611320    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:07:58 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ce6d7885-3832-462d-a61a-3d8ae6546c5c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0270)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 17:07:58.611348    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:07:58 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ce6d7885-3832-462d-a61a-3d8ae6546c5c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0270)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 17:07:58.611396    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:07:58 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "ce6d7885-3832-462d-a61a-3d8ae6546c5c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/force-systemd-env-331000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-331000"}
	I0815 17:07:58.611445    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:07:58 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U ce6d7885-3832-462d-a61a-3d8ae6546c5c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/force-systemd-env-331000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/console-ring -f kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-331000"
	I0815 17:07:58.611461    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:07:58 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0815 17:07:58.614455    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:07:58 DEBUG: hyperkit: Pid is 6676
	I0815 17:07:58.614926    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 0
	I0815 17:07:58.614939    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:07:58.614986    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6676
	I0815 17:07:58.615951    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for a:50:e4:41:dd:ff in /var/db/dhcpd_leases ...
	I0815 17:07:58.616043    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:07:58.616061    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:07:58.616114    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:07:58.616127    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:07:58.616135    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:07:58.616142    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:07:58.616150    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:07:58.616156    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:07:58.616169    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:07:58.616182    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:07:58.616205    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:07:58.616216    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:07:58.616237    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:07:58.616260    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:07:58.616278    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:07:58.616295    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:07:58.616316    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:07:58.616334    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:07:58.616360    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:07:58.616375    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:07:58.616402    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:07:58.616422    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:07:58.616439    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:07:58.616453    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
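Every "Attempt N" block above is one pass over macOS's bootpd lease database: the driver re-reads /var/db/dhcpd_leases and looks for an entry whose hardware address matches the generated MAC; when the attempts run out without a match it raises the "IP address never found in dhcp leases file" error seen earlier. A sketch of that lookup, assuming the usual bootpd entry layout ("{", then name=/ip_address=/hw_address=1,<mac>/lease= fields, then "}") that the dhcp entries above echo; the real driver's parser may differ in detail:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIPByMAC scans a bootpd-style leases file and returns the
// ip_address of the entry whose hw_address matches mac.
func findIPByMAC(path, mac string) (string, bool) {
	f, err := os.Open(path)
	if err != nil {
		return "", false
	}
	defer f.Close()
	var ip string
	matched := false
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{": // start of a lease entry
			ip, matched = "", false
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			v := strings.TrimPrefix(line, "hw_address=") // e.g. "1,a:50:e4:41:dd:ff"
			if i := strings.IndexByte(v, ','); i >= 0 {
				v = v[i+1:]
			}
			matched = strings.EqualFold(v, mac)
		case line == "}": // end of entry
			if matched && ip != "" {
				return ip, true
			}
		}
	}
	return "", false
}

func main() {
	ip, ok := findIPByMAC("/var/db/dhcpd_leases", "a:50:e4:41:dd:ff")
	fmt.Println(ip, ok) // ok stays false until the lease appears
}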
	I0815 17:07:58.622624    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:07:58 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0815 17:07:58.630747    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:07:58 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/force-systemd-env-331000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0815 17:07:58.631623    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:07:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 17:07:58.631646    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:07:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 17:07:58.631668    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:07:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 17:07:58.631689    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:07:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 17:07:59.010399    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:07:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0815 17:07:59.010414    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:07:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0815 17:07:59.125215    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:07:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 17:07:59.125232    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:07:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 17:07:59.125243    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:07:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 17:07:59.125276    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:07:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 17:07:59.126120    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:07:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0815 17:07:59.126130    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:07:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0815 17:08:00.618273    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 1
	I0815 17:08:00.618287    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:08:00.618383    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6676
	I0815 17:08:00.619195    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for a:50:e4:41:dd:ff in /var/db/dhcpd_leases ...
	I0815 17:08:00.619270    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:08:00.619284    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:08:00.619304    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:08:00.619312    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:08:00.619319    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:08:00.619326    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:08:00.619334    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:08:00.619344    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:08:00.619356    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:08:00.619363    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:08:00.619371    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:08:00.619379    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:08:00.619386    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:08:00.619394    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:08:00.619407    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:08:00.619418    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:08:00.619427    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:08:00.619434    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:08:00.619452    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:08:00.619465    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:08:00.619503    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:08:00.619517    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:08:00.619532    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:08:00.619552    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:08:02.621834    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 2
	I0815 17:08:02.621851    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:08:02.621942    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6676
	I0815 17:08:02.622793    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for a:50:e4:41:dd:ff in /var/db/dhcpd_leases ...
	I0815 17:08:02.622841    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:08:02.622851    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:08:02.622862    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:08:02.622872    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:08:02.622879    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:08:02.622885    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:08:02.622905    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:08:02.622919    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:08:02.622931    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:08:02.622949    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:08:02.622962    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:08:02.622974    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:08:02.622981    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:08:02.622987    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:08:02.622994    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:08:02.623015    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:08:02.623027    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:08:02.623043    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:08:02.623055    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:08:02.623070    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:08:02.623083    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:08:02.623092    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:08:02.623103    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:08:02.623113    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:08:04.509346    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:08:04 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0815 17:08:04.509459    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:08:04 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0815 17:08:04.509494    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:08:04 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0815 17:08:04.530154    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | 2024/08/15 17:08:04 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0815 17:08:04.625858    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 3
	I0815 17:08:04.625886    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:08:04.626065    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6676
	I0815 17:08:04.627463    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for a:50:e4:41:dd:ff in /var/db/dhcpd_leases ...
	I0815 17:08:04.627621    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:08:04.627643    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:08:04.627658    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:08:04.627685    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:08:04.627714    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:08:04.627741    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:08:04.627760    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:08:04.627777    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:08:04.627792    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:08:04.627808    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:08:04.627823    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:08:04.627839    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:08:04.627853    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:08:04.627883    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:08:04.627892    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:08:04.627910    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:08:04.627923    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:08:04.627932    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:08:04.627943    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:08:04.627954    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:08:04.627964    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:08:04.627982    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:08:04.627998    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:08:04.628010    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:08:06.630132    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 4
	I0815 17:08:06.630151    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:08:06.630242    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6676
	I0815 17:08:06.631022    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for a:50:e4:41:dd:ff in /var/db/dhcpd_leases ...
	I0815 17:08:06.631097    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:08:06.631113    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:08:06.631130    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:08:06.631160    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:08:06.631173    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:08:06.631183    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:08:06.631191    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:08:06.631197    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:08:06.631218    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:08:06.631226    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:08:06.631247    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:08:06.631261    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:08:06.631271    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:08:06.631284    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:08:06.631303    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:08:06.631317    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:08:06.631327    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:08:06.631335    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:08:06.631342    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:08:06.631351    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:08:06.631358    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:08:06.631366    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:08:06.631374    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:08:06.631382    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:08:08.632829    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 5
	I0815 17:08:08.632851    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:08:08.632915    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6676
	I0815 17:08:08.633711    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for a:50:e4:41:dd:ff in /var/db/dhcpd_leases ...
	I0815 17:08:08.633765    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:08:08.633777    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:08:08.633791    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:08:08.633799    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:08:08.633812    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:08:08.633822    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:08:08.633832    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:08:08.633839    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:08:08.633845    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:08:08.633852    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:08:08.633858    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:08:08.633867    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:08:08.633885    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:08:08.633898    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:08:08.633907    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:08:08.633915    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:08:08.633923    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:08:08.633936    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:08:08.633943    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:08:08.633951    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:08:08.633959    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:08:08.633967    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:08:08.633973    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:08:08.633982    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:08:10.636286    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 6
	I0815 17:08:10.636301    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:08:10.636375    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6676
	I0815 17:08:10.637206    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for a:50:e4:41:dd:ff in /var/db/dhcpd_leases ...
	I0815 17:08:10.637265    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:08:10.637275    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:08:10.637285    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:08:10.637291    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:08:10.637298    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:08:10.637307    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:08:10.637317    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:08:10.637347    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:08:10.637358    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:08:10.637370    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:08:10.637385    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:08:10.637396    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:08:10.637404    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:08:10.637422    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:08:10.637435    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:08:10.637443    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:08:10.637450    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:08:10.637463    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:08:10.637476    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:08:10.637491    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:08:10.637504    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:08:10.637512    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:08:10.637518    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:08:10.637543    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:08:12.638373    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 7
	I0815 17:08:12.638389    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:08:12.638446    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6676
	I0815 17:08:12.639610    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for a:50:e4:41:dd:ff in /var/db/dhcpd_leases ...
	I0815 17:08:12.639661    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:08:12.639673    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:08:12.639687    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:08:12.639694    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:08:12.639701    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:08:12.639707    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:08:12.639727    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:08:12.639744    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:08:12.639760    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:08:12.639773    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:08:12.639788    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:08:12.639801    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:08:12.639809    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:08:12.639816    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:08:12.639823    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:08:12.639831    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:08:12.639838    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:08:12.639851    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:08:12.639859    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:08:12.639866    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:08:12.639874    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:08:12.639881    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:08:12.639887    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:08:12.639895    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
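
(The attempts above and below are one lease-polling loop: roughly every two seconds the driver re-reads /var/db/dhcpd_leases and scans the 23 parsed entries for the VM's MAC address a:50:e4:41:dd:ff, which never appears. The sketch below is a minimal, hypothetical reconstruction of that loop, not the driver's actual code: the two-second interval and the attempt counter are inferred from the timestamps in this log, and entry matching is simplified to a substring scan where the real driver parses each lease into the {Name IPAddress HWAddress ID Lease} record it logs.)

	// lease_poll.go - minimal sketch of a DHCP-lease polling loop like the
	// one visible in this log. Assumptions, not the driver's real code: the
	// 2s interval is inferred from the log timestamps, and substring
	// matching stands in for full lease parsing.
	package main

	import (
		"fmt"
		"os"
		"strings"
		"time"
	)

	// waitForLease re-reads leasePath until a line containing mac appears
	// or maxAttempts is exhausted, returning the matching line if found.
	func waitForLease(leasePath, mac string, maxAttempts int, interval time.Duration) (string, error) {
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			fmt.Printf("Attempt %d: searching for %s in %s\n", attempt, mac, leasePath)
			if data, err := os.ReadFile(leasePath); err == nil {
				for _, line := range strings.Split(string(data), "\n") {
					if strings.Contains(line, mac) {
						return line, nil
					}
				}
			}
			// No lease for this MAC yet; wait and retry.
			time.Sleep(interval)
		}
		return "", fmt.Errorf("no lease for %s after %d attempts", mac, maxAttempts)
	}

	func main() {
		entry, err := waitForLease("/var/db/dhcpd_leases", "a:50:e4:41:dd:ff", 60, 2*time.Second)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("found lease entry:", entry)
	}
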
	I0815 17:08:14.641193    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 8
	I0815 17:08:14.641206    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:08:14.641321    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6676
	I0815 17:08:14.642290    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for a:50:e4:41:dd:ff in /var/db/dhcpd_leases ...
	I0815 17:08:14.642334    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	... same 23 dhcp entries as in Attempt 7 (192.169.0.2 through 192.169.0.24); none match a:50:e4:41:dd:ff ...
	I0815 17:08:16.643390    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 9
	I0815 17:08:16.643402    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:08:16.643457    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6676
	I0815 17:08:16.644232    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for a:50:e4:41:dd:ff in /var/db/dhcpd_leases ...
	I0815 17:08:16.644282    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	... same 23 dhcp entries as in Attempt 7 (192.169.0.2 through 192.169.0.24); none match a:50:e4:41:dd:ff ...
	I0815 17:08:18.646666    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 10
	I0815 17:08:18.646681    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:08:18.646734    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6676
	I0815 17:08:18.647519    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for a:50:e4:41:dd:ff in /var/db/dhcpd_leases ...
	I0815 17:08:18.647579    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	... same 23 dhcp entries as in Attempt 7 (192.169.0.2 through 192.169.0.24); none match a:50:e4:41:dd:ff ...
	I0815 17:08:20.648839    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 11
	I0815 17:08:20.648860    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:08:20.648869    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6676
	I0815 17:08:20.649697    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for a:50:e4:41:dd:ff in /var/db/dhcpd_leases ...
	I0815 17:08:20.649739    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	... same 23 dhcp entries as in Attempt 7 (192.169.0.2 through 192.169.0.24); none match a:50:e4:41:dd:ff ...
	I0815 17:08:22.651894    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 12
	I0815 17:08:22.651906    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:08:22.651984    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6676
	I0815 17:08:22.652904    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for a:50:e4:41:dd:ff in /var/db/dhcpd_leases ...
	I0815 17:08:22.652946    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	... same 23 dhcp entries as in Attempt 7 (192.169.0.2 through 192.169.0.24); none match a:50:e4:41:dd:ff ...
	I0815 17:08:24.655248    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 13
	I0815 17:08:24.655264    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:08:24.655336    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6676
	I0815 17:08:24.656115    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for a:50:e4:41:dd:ff in /var/db/dhcpd_leases ...
	I0815 17:08:24.656172    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	... same 23 dhcp entries as in Attempt 7 (192.169.0.2 through 192.169.0.24); none match a:50:e4:41:dd:ff ...
	I0815 17:08:26.657555    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 14
	I0815 17:08:26.657567    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:08:26.657629    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6676
	I0815 17:08:26.658441    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for a:50:e4:41:dd:ff in /var/db/dhcpd_leases ...
	I0815 17:08:26.658495    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	... same 23 dhcp entries as in Attempt 7 (192.169.0.2 through 192.169.0.24); none match a:50:e4:41:dd:ff ...
	I0815 17:08:28.660243    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 15
	I0815 17:08:28.660259    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:08:28.660370    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6676
	I0815 17:08:28.661199    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for a:50:e4:41:dd:ff in /var/db/dhcpd_leases ...
	I0815 17:08:28.661258    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	... same 23 dhcp entries as in Attempt 7 (192.169.0.2 through 192.169.0.24); none match a:50:e4:41:dd:ff ...
	I0815 17:08:30.661836    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 16
	I0815 17:08:30.661851    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:08:30.661927    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6676
	I0815 17:08:30.662706    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for a:50:e4:41:dd:ff in /var/db/dhcpd_leases ...
	I0815 17:08:30.662758    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	... same 23 dhcp entries as in Attempt 7 (192.169.0.2 through 192.169.0.24); none match a:50:e4:41:dd:ff ...
	I0815 17:08:32.663861    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 17
	I0815 17:08:32.663874    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:08:32.663940    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6676
	I0815 17:08:32.664845    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for a:50:e4:41:dd:ff in /var/db/dhcpd_leases ...
	I0815 17:08:32.664913    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:08:32.664925    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:08:32.664934    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:08:32.664941    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:08:32.664975    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:08:32.664992    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:08:32.665005    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:08:32.665013    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:08:32.665022    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:08:32.665048    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:08:32.665063    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:08:32.665078    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:08:32.665091    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:08:32.665100    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:08:32.665108    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:08:32.665128    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:08:32.665136    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:08:32.665146    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:08:32.665154    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:08:32.665166    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:08:32.665175    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:08:32.665181    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:08:32.665189    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:08:32.665207    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
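
Each numbered attempt in this stretch of the log is one pass of the same poll: the hyperkit driver re-reads /var/db/dhcpd_leases roughly every two seconds, looking for a lease whose HWAddress matches the MAC it generated for the new VM (a:50:e4:41:dd:ff), and none of the 23 existing minikube leases it prints ever matches. The following is a minimal sketch of that matching step, assuming the standard macOS leases-file layout that these printed entries mirror (findLeaseIP and its field handling are illustrative, not the driver's actual code):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findLeaseIP scans the macOS dhcpd leases file for a block whose
// hw_address matches mac and returns its ip_address, mirroring the
// "Searching for <mac> in /var/db/dhcpd_leases" loop in the log above.
func findLeaseIP(path, mac string) (string, bool) {
	f, err := os.Open(path)
	if err != nil {
		return "", false
	}
	defer f.Close()

	var ip, hw string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			hw = strings.TrimPrefix(line, "hw_address=")
			if i := strings.Index(hw, ","); i >= 0 {
				hw = hw[i+1:] // drop the leading "1," type prefix seen in the ID field above
			}
		case line == "}": // end of one lease block
			if strings.EqualFold(hw, mac) {
				return ip, true
			}
			ip, hw = "", ""
		}
	}
	return "", false
}

func main() {
	mac := "a:50:e4:41:dd:ff" // the address this log is searching for
	if ip, ok := findLeaseIP("/var/db/dhcpd_leases", mac); ok {
		fmt.Println("lease found:", ip)
	} else {
		fmt.Println("no lease yet; sleep 2s and retry") // the log's "Attempt N" branch
	}
}

On a match the driver would record the IP and proceed to provisioning; here every pass falls through to the retry branch, so the attempt counter keeps climbing until the driver's retry budget is exhausted.
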
	I0815 17:08:34.667223    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 18
	I0815 17:08:34.667237    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:08:34.667295    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6676
	I0815 17:08:34.668119    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for a:50:e4:41:dd:ff in /var/db/dhcpd_leases ...
	I0815 17:08:34.668168    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:08:34.668180    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:08:34.668191    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:08:34.668197    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:08:34.668204    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:08:34.668213    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:08:34.668220    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:08:34.668229    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:08:34.668237    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:08:34.668243    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:08:34.668252    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:08:34.668259    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:08:34.668268    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:08:34.668275    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:08:34.668288    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:08:34.668298    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:08:34.668304    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:08:34.668311    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:08:34.668318    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:08:34.668326    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:08:34.668334    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:08:34.668340    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:08:34.668349    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:08:34.668365    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:08:36.669943    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 19
	I0815 17:08:36.669980    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:08:36.669988    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6676
	I0815 17:08:36.670841    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for a:50:e4:41:dd:ff in /var/db/dhcpd_leases ...
	I0815 17:08:36.670887    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:08:36.670897    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:08:36.670907    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:08:36.670915    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:08:36.670922    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:08:36.670929    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:08:36.670936    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:08:36.670942    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:08:36.670950    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:08:36.670957    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:08:36.670970    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:08:36.670981    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:08:36.670989    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:08:36.670997    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:08:36.671006    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:08:36.671012    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:08:36.671024    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:08:36.671038    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:08:36.671046    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:08:36.671054    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:08:36.671062    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:08:36.671070    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:08:36.671077    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:08:36.671085    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:08:38.673145    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 20
	I0815 17:08:38.673160    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:08:38.673214    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6676
	I0815 17:08:38.674017    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for a:50:e4:41:dd:ff in /var/db/dhcpd_leases ...
	I0815 17:08:38.674056    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:08:38.674067    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:08:38.674083    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:08:38.674093    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:08:38.674110    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:08:38.674124    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:08:38.674141    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:08:38.674159    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:08:38.674168    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:08:38.674176    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:08:38.674191    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:08:38.674203    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:08:38.674213    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:08:38.674222    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:08:38.674230    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:08:38.674237    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:08:38.674249    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:08:38.674259    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:08:38.674272    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:08:38.674280    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:08:38.674288    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:08:38.674294    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:08:38.674301    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:08:38.674309    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:08:40.675504    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 21
	I0815 17:08:40.675525    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:08:40.675569    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6676
	I0815 17:08:40.676448    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for a:50:e4:41:dd:ff in /var/db/dhcpd_leases ...
	I0815 17:08:40.676493    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:08:40.676506    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:08:40.676518    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:08:40.676524    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:08:40.676532    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:08:40.676538    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:08:40.676544    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:08:40.676551    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:08:40.676558    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:08:40.676566    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:08:40.676578    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:08:40.676587    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:08:40.676596    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:08:40.676611    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:08:40.676619    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:08:40.676628    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:08:40.676635    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:08:40.676641    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:08:40.676648    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:08:40.676656    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:08:40.676667    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:08:40.676675    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:08:40.676682    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:08:40.676690    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:08:42.677417    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 22
	I0815 17:08:42.677434    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:08:42.677503    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6676
	I0815 17:08:42.678322    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for a:50:e4:41:dd:ff in /var/db/dhcpd_leases ...
	I0815 17:08:42.678366    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:08:42.678377    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:08:42.678386    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:08:42.678392    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:08:42.678399    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:08:42.678408    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:08:42.678415    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:08:42.678423    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:08:42.678431    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:08:42.678439    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:08:42.678453    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:08:42.678469    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:08:42.678481    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:08:42.678491    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:08:42.678499    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:08:42.678515    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:08:42.678524    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:08:42.678531    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:08:42.678539    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:08:42.678545    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:08:42.678552    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:08:42.678561    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:08:42.678573    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:08:42.678584    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:08:44.680040    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 23
	I0815 17:08:44.680053    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:08:44.680103    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6676
	I0815 17:08:44.680868    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for a:50:e4:41:dd:ff in /var/db/dhcpd_leases ...
	I0815 17:08:44.680927    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:08:44.680944    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:08:44.680958    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:08:44.680964    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:08:44.680971    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:08:44.680981    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:08:44.680988    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:08:44.680994    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:08:44.681010    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:08:44.681022    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:08:44.681040    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:08:44.681072    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:08:44.681080    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:08:44.681113    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:08:44.681134    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:08:44.681144    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:08:44.681162    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:08:44.681170    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:08:44.681177    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:08:44.681185    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:08:44.681193    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:08:44.681201    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:08:44.681208    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:08:44.681216    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:08:46.683207    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 24
	I0815 17:08:46.683222    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:08:46.683347    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6676
	I0815 17:08:46.684266    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for a:50:e4:41:dd:ff in /var/db/dhcpd_leases ...
	I0815 17:08:46.684314    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:08:46.684325    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:08:46.684344    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:08:46.684354    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:08:46.684361    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:08:46.684372    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:08:46.684386    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:08:46.684396    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:08:46.684407    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:08:46.684415    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:08:46.684424    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:08:46.684433    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:08:46.684441    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:08:46.684447    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:08:46.684455    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:08:46.684463    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:08:46.684471    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:08:46.684477    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:08:46.684484    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:08:46.684490    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:08:46.684497    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:08:46.684505    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:08:46.684520    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:08:46.684532    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:08:48.685527    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 25
	I0815 17:08:48.685540    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:08:48.685610    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6676
	I0815 17:08:48.686399    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for a:50:e4:41:dd:ff in /var/db/dhcpd_leases ...
	I0815 17:08:48.686446    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:08:48.686459    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:08:48.686477    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:08:48.686487    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:08:48.686494    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:08:48.686503    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:08:48.686516    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:08:48.686528    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:08:48.686551    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:08:48.686563    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:08:48.686578    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:08:48.686592    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:08:48.686606    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:08:48.686641    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:08:48.686680    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:08:48.686688    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:08:48.686695    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:08:48.686703    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:08:48.686710    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:08:48.686719    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:08:48.686726    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:08:48.686735    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:08:48.686742    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:08:48.686750    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:08:50.688366    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 26
	I0815 17:08:50.688379    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:08:50.688427    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6676
	I0815 17:08:50.689253    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for a:50:e4:41:dd:ff in /var/db/dhcpd_leases ...
	I0815 17:08:50.689295    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:08:50.689304    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:08:50.689314    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:08:50.689320    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:08:50.689328    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:08:50.689335    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:08:50.689347    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:08:50.689360    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:08:50.689366    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:08:50.689375    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:08:50.689390    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:08:50.689400    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:08:50.689407    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:08:50.689415    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:08:50.689428    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:08:50.689436    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:08:50.689445    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:08:50.689454    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:08:50.689461    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:08:50.689469    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:08:50.689477    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:08:50.689485    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:08:50.689492    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:08:50.689501    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:08:52.690246    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 27
	I0815 17:08:52.690265    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:08:52.690302    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6676
	I0815 17:08:52.691231    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for a:50:e4:41:dd:ff in /var/db/dhcpd_leases ...
	I0815 17:08:52.691259    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:08:52.691271    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:08:52.691287    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:08:52.691298    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:08:52.691307    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:08:52.691317    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:08:52.691326    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:08:52.691333    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:08:52.691339    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:08:52.691346    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:08:52.691352    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:08:52.691360    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:08:52.691365    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:08:52.691371    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:08:52.691378    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:08:52.691384    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:08:52.691392    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:08:52.691400    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:08:52.691407    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:08:52.691414    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:08:52.691422    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:08:52.691431    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:08:52.691442    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:08:52.691453    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:08:54.691863    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 28
	I0815 17:08:54.691881    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:08:54.691946    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6676
	I0815 17:08:54.692732    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for a:50:e4:41:dd:ff in /var/db/dhcpd_leases ...
	I0815 17:08:54.692784    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:08:54.692795    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:08:54.692807    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:08:54.692821    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:08:54.692828    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:08:54.692834    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:08:54.692842    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:08:54.692852    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:08:54.692859    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:08:54.692866    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:08:54.692877    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:08:54.692886    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:08:54.692894    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:08:54.692903    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:08:54.692927    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:08:54.692939    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:08:54.692947    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:08:54.692953    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:08:54.692960    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:08:54.692972    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:08:54.692982    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:08:54.692992    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:08:54.693000    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:08:54.693008    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:08:56.693400    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Attempt 29
	I0815 17:08:56.693421    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | exe=/Users/jenkins/minikube-integration/19452-977/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I0815 17:08:56.693458    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | hyperkit pid from json: 6676
	I0815 17:08:56.694265    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Searching for a:50:e4:41:dd:ff in /var/db/dhcpd_leases ...
	I0815 17:08:56.694318    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0815 17:08:56.694328    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:95:e6:3c:4e:19 ID:1,9a:95:e6:3c:4e:19 Lease:0x66bfe951}
	I0815 17:08:56.694336    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:1e:99:1f:b7:bd:c1 ID:1,1e:99:1f:b7:bd:c1 Lease:0x66be9743}
	I0815 17:08:56.694342    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:a:40:7e:9c:45:9d ID:1,a:40:7e:9c:45:9d Lease:0x66bfe7f5}
	I0815 17:08:56.694349    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:2e:87:21:44:98:f9 ID:1,2e:87:21:44:98:f9 Lease:0x66bfe800}
	I0815 17:08:56.694354    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:e:ee:95:d2:79:7d ID:1,e:ee:95:d2:79:7d Lease:0x66bfe7af}
	I0815 17:08:56.694374    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:8e:64:c7:b3:41:c2 ID:1,8e:64:c7:b3:41:c2 Lease:0x66bfe6b3}
	I0815 17:08:56.694380    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:96:97:e8:9f:b1:43 ID:1,96:97:e8:9f:b1:43 Lease:0x66bfe5f2}
	I0815 17:08:56.694388    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:96:16:38:90:b6:42 ID:1,96:16:38:90:b6:42 Lease:0x66bfe55b}
	I0815 17:08:56.694394    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:82:4:f8:9c:33:75 ID:1,82:4:f8:9c:33:75 Lease:0x66be9351}
	I0815 17:08:56.694401    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:66:b7:e0:28:69:8 ID:1,66:b7:e0:28:69:8 Lease:0x66bfe531}
	I0815 17:08:56.694409    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ce:c8:9d:dc:27:af ID:1,ce:c8:9d:dc:27:af Lease:0x66bfe4ed}
	I0815 17:08:56.694427    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f2:8:cd:6:1:ac ID:1,f2:8:cd:6:1:ac Lease:0x66bfe2bb}
	I0815 17:08:56.694440    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:9e:6c:4e:69:e ID:1,52:9e:6c:4e:69:e Lease:0x66bfe293}
	I0815 17:08:56.694451    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:e2:b1:b7:de:f8:91 ID:1,e2:b1:b7:de:f8:91 Lease:0x66bfe253}
	I0815 17:08:56.694460    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:43:a4:75:dc:ff ID:1,96:43:a4:75:dc:ff Lease:0x66bfe226}
	I0815 17:08:56.694468    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:51:e5:2f:51:6f ID:1,e2:51:e5:2f:51:6f Lease:0x66bfe1b2}
	I0815 17:08:56.694476    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be9094}
	I0815 17:08:56.694491    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 17:08:56.694504    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 17:08:56.694512    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 17:08:56.694521    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:82:56:9b:f8:f7 ID:1,fe:82:56:9b:f8:f7 Lease:0x66bfdd5b}
	I0815 17:08:56.694529    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:da:cb:da:f6:49:96 ID:1,da:cb:da:f6:49:96 Lease:0x66bfdcc6}
	I0815 17:08:56.694538    6590 main.go:141] libmachine: (force-systemd-env-331000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:56:2d:ca:e8:4c:c3 ID:1,56:2d:ca:e8:4c:c3 Lease:0x66bfdb49}
	I0815 17:08:58.696414    6590 client.go:171] duration metric: took 1m0.828581715s to LocalClient.Create
	I0815 17:09:00.697357    6590 start.go:128] duration metric: took 1m2.86074047s to createHost
	I0815 17:09:00.697372    6590 start.go:83] releasing machines lock for "force-systemd-env-331000", held for 1m2.860851949s
	W0815 17:09:00.697501    6590 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p force-systemd-env-331000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a:50:e4:41:dd:ff
	* Failed to start hyperkit VM. Running "minikube delete -p force-systemd-env-331000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a:50:e4:41:dd:ff
	I0815 17:09:00.778695    6590 out.go:201] 
	W0815 17:09:00.815552    6590 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a:50:e4:41:dd:ff
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a:50:e4:41:dd:ff
	W0815 17:09:00.815565    6590 out.go:270] * 
	* 
	W0815 17:09:00.816318    6590 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 17:09:00.910424    6590 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-331000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-331000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-331000 ssh "docker info --format {{.CgroupDriver}}": exit status 50 (172.5405ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node force-systemd-env-331000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-331000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 50
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-08-15 17:09:01.24672 -0700 PDT m=+3848.128699072
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-331000 -n force-systemd-env-331000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-331000 -n force-systemd-env-331000: exit status 7 (79.342218ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 17:09:01.324155    6697 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0815 17:09:01.324176    6697 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-331000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "force-systemd-env-331000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-331000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-331000: (5.245446734s)
--- FAIL: TestForceSystemdEnv (200.69s)
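
Note: the "Attempt 28 … Attempt 29" loop above is the hyperkit driver's IP-discovery mechanism. After booting the VM it re-reads /var/db/dhcpd_leases roughly every 2 seconds, looking for an entry whose HWAddress matches the MAC it generated (a:50:e4:41:dd:ff here), and gives up after about a minute, which surfaces as the GUEST_PROVISION error. The sketch below reproduces that check standalone; it is not minikube's code, and the hw_address=1,<mac> line format is an assumption inferred from the parsed entries in the log.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

// leaseHasMAC reports whether the vmnet DHCP lease file contains an entry
// for the given MAC. The lease file stores MACs without zero-padding
// (e.g. "a:40:7e:..." in the entries above), so compare the unpadded form.
func leaseHasMAC(path, mac string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()
	s := bufio.NewScanner(f)
	for s.Scan() {
		line := strings.TrimSpace(s.Text())
		// Assumed entry format: "hw_address=1,a:50:e4:41:dd:ff".
		if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, ","+mac) {
			return true, nil
		}
	}
	return false, s.Err()
}

func main() {
	const mac = "a:50:e4:41:dd:ff" // the MAC the failing run waited for
	for attempt := 1; attempt <= 30; attempt++ { // ~1m at 2s spacing, matching the log
		ok, err := leaseHasMAC("/var/db/dhcpd_leases", mac)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if ok {
			fmt.Println("lease found for", mac)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("no lease for", mac, "- the same condition the test hit")
}

If this loop never finds the MAC, the guest either never sent a DHCP request or the host's DHCP service never recorded one, which is consistent with the log's own suggestion to delete the profile and retry.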

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (148.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-138000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-138000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-138000 -v=7 --alsologtostderr: (27.09786145s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-138000 --wait=true -v=7 --alsologtostderr
E0815 16:25:21.702621    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ha-138000 --wait=true -v=7 --alsologtostderr: exit status 90 (1m58.798056333s)

                                                
                                                
-- stdout --
	* [ha-138000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-977/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "ha-138000" primary control-plane node in "ha-138000" cluster
	* Restarting existing hyperkit VM for "ha-138000" ...
	* Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	* Enabled addons: 
	
	* Starting "ha-138000-m02" control-plane node in "ha-138000" cluster
	* Restarting existing hyperkit VM for "ha-138000-m02" ...
	* Found network options:
	  - NO_PROXY=192.169.0.5
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 16:24:31.233096    3649 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:24:31.233281    3649 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:24:31.233287    3649 out.go:358] Setting ErrFile to fd 2...
	I0815 16:24:31.233290    3649 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:24:31.233463    3649 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-977/.minikube/bin
	I0815 16:24:31.234892    3649 out.go:352] Setting JSON to false
	I0815 16:24:31.259609    3649 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1442,"bootTime":1723762829,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0815 16:24:31.259835    3649 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 16:24:31.281220    3649 out.go:177] * [ha-138000] minikube v1.33.1 on Darwin 14.6.1
	I0815 16:24:31.323339    3649 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 16:24:31.323394    3649 notify.go:220] Checking for updates...
	I0815 16:24:31.366134    3649 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:24:31.387302    3649 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0815 16:24:31.408076    3649 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 16:24:31.429265    3649 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-977/.minikube
	I0815 16:24:31.450282    3649 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 16:24:31.472864    3649 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:24:31.473038    3649 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 16:24:31.473723    3649 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:24:31.473802    3649 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:24:31.483475    3649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52024
	I0815 16:24:31.483866    3649 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:24:31.484264    3649 main.go:141] libmachine: Using API Version  1
	I0815 16:24:31.484274    3649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:24:31.484483    3649 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:24:31.484590    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:31.513331    3649 out.go:177] * Using the hyperkit driver based on existing profile
	I0815 16:24:31.555013    3649 start.go:297] selected driver: hyperkit
	I0815 16:24:31.555040    3649 start.go:901] validating driver "hyperkit" against &{Name:ha-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:24:31.555294    3649 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 16:24:31.555482    3649 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:24:31.555679    3649 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19452-977/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0815 16:24:31.565322    3649 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0815 16:24:31.570113    3649 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:24:31.570133    3649 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0815 16:24:31.573295    3649 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 16:24:31.573376    3649 cni.go:84] Creating CNI manager for ""
	I0815 16:24:31.573385    3649 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0815 16:24:31.573458    3649 start.go:340] cluster config:
	{Name:ha-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:24:31.573576    3649 iso.go:125] acquiring lock: {Name:mkb93fc66b60be15dffa9eb2999a4be8d8d4d218 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:24:31.616257    3649 out.go:177] * Starting "ha-138000" primary control-plane node in "ha-138000" cluster
	I0815 16:24:31.636985    3649 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:24:31.637060    3649 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0815 16:24:31.637085    3649 cache.go:56] Caching tarball of preloaded images
	I0815 16:24:31.637273    3649 preload.go:172] Found /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0815 16:24:31.637292    3649 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:24:31.637487    3649 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:24:31.638371    3649 start.go:360] acquireMachinesLock for ha-138000: {Name:mkac8034c8a60617271f2754a03dcaf74fc4fef7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:24:31.638490    3649 start.go:364] duration metric: took 82.356µs to acquireMachinesLock for "ha-138000"
	I0815 16:24:31.638525    3649 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:24:31.638544    3649 fix.go:54] fixHost starting: 
	I0815 16:24:31.638958    3649 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:24:31.639008    3649 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:24:31.648062    3649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52026
	I0815 16:24:31.648421    3649 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:24:31.648791    3649 main.go:141] libmachine: Using API Version  1
	I0815 16:24:31.648804    3649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:24:31.649022    3649 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:24:31.649142    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:31.649278    3649 main.go:141] libmachine: (ha-138000) Calling .GetState
	I0815 16:24:31.649372    3649 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:24:31.649446    3649 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid from json: 3071
	I0815 16:24:31.650352    3649 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid 3071 missing from process table
	I0815 16:24:31.650388    3649 fix.go:112] recreateIfNeeded on ha-138000: state=Stopped err=<nil>
	I0815 16:24:31.650403    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	W0815 16:24:31.650489    3649 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:24:31.698042    3649 out.go:177] * Restarting existing hyperkit VM for "ha-138000" ...
	I0815 16:24:31.718584    3649 main.go:141] libmachine: (ha-138000) Calling .Start
	I0815 16:24:31.718879    3649 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:24:31.718940    3649 main.go:141] libmachine: (ha-138000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/hyperkit.pid
	I0815 16:24:31.721002    3649 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid 3071 missing from process table
	I0815 16:24:31.721020    3649 main.go:141] libmachine: (ha-138000) DBG | pid 3071 is in state "Stopped"
	I0815 16:24:31.721044    3649 main.go:141] libmachine: (ha-138000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/hyperkit.pid...
	I0815 16:24:31.721441    3649 main.go:141] libmachine: (ha-138000) DBG | Using UUID bf1b12d0-37a9-4c04-a028-0dd0a6dcd337
	I0815 16:24:31.829003    3649 main.go:141] libmachine: (ha-138000) DBG | Generated MAC 66:4d:cd:54:35:15
	I0815 16:24:31.829029    3649 main.go:141] libmachine: (ha-138000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000
	I0815 16:24:31.829133    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bf1b12d0-37a9-4c04-a028-0dd0a6dcd337", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c24e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:24:31.829169    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bf1b12d0-37a9-4c04-a028-0dd0a6dcd337", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c24e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:24:31.829203    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "bf1b12d0-37a9-4c04-a028-0dd0a6dcd337", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/ha-138000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"}
	I0815 16:24:31.829238    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U bf1b12d0-37a9-4c04-a028-0dd0a6dcd337 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/ha-138000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/console-ring -f kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"
	I0815 16:24:31.829247    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0815 16:24:31.830765    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 DEBUG: hyperkit: Pid is 3662
	I0815 16:24:31.831139    3649 main.go:141] libmachine: (ha-138000) DBG | Attempt 0
	I0815 16:24:31.831155    3649 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:24:31.831242    3649 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid from json: 3662
	I0815 16:24:31.832840    3649 main.go:141] libmachine: (ha-138000) DBG | Searching for 66:4d:cd:54:35:15 in /var/db/dhcpd_leases ...
	I0815 16:24:31.832917    3649 main.go:141] libmachine: (ha-138000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0815 16:24:31.832934    3649 main.go:141] libmachine: (ha-138000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be8e15}
	I0815 16:24:31.832943    3649 main.go:141] libmachine: (ha-138000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfdf74}
	I0815 16:24:31.832962    3649 main.go:141] libmachine: (ha-138000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfdedc}
	I0815 16:24:31.832970    3649 main.go:141] libmachine: (ha-138000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfde64}
	I0815 16:24:31.832977    3649 main.go:141] libmachine: (ha-138000) DBG | Found match: 66:4d:cd:54:35:15
	I0815 16:24:31.833028    3649 main.go:141] libmachine: (ha-138000) DBG | IP: 192.169.0.5
	I0815 16:24:31.833038    3649 main.go:141] libmachine: (ha-138000) Calling .GetConfigRaw
	I0815 16:24:31.833705    3649 main.go:141] libmachine: (ha-138000) Calling .GetIP
	I0815 16:24:31.833895    3649 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:24:31.834359    3649 machine.go:93] provisionDockerMachine start ...
	I0815 16:24:31.834370    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:31.834509    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:31.834611    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:31.834733    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:31.834881    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:31.834976    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:31.835114    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:24:31.835296    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:24:31.835304    3649 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 16:24:31.838795    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0815 16:24:31.891055    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0815 16:24:31.891732    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:24:31.891746    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:24:31.891753    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:24:31.891763    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:24:32.275543    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0815 16:24:32.275556    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0815 16:24:32.390162    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:24:32.390181    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:24:32.390193    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:24:32.390217    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:24:32.391060    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0815 16:24:32.391070    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0815 16:24:37.953601    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:37 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0815 16:24:37.953741    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:37 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0815 16:24:37.953751    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:37 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0815 16:24:37.980241    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:37 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0815 16:24:42.910400    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 16:24:42.910418    3649 main.go:141] libmachine: (ha-138000) Calling .GetMachineName
	I0815 16:24:42.910559    3649 buildroot.go:166] provisioning hostname "ha-138000"
	I0815 16:24:42.910571    3649 main.go:141] libmachine: (ha-138000) Calling .GetMachineName
	I0815 16:24:42.910673    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:42.910777    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:42.910859    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:42.910959    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:42.911045    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:42.911177    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:24:42.911343    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:24:42.911352    3649 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-138000 && echo "ha-138000" | sudo tee /etc/hostname
	I0815 16:24:42.985179    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-138000
	
	I0815 16:24:42.985199    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:42.985338    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:42.985446    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:42.985538    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:42.985614    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:42.985749    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:24:42.985891    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:24:42.985905    3649 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-138000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-138000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-138000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 16:24:43.055472    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 16:24:43.055491    3649 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19452-977/.minikube CaCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19452-977/.minikube}
	I0815 16:24:43.055508    3649 buildroot.go:174] setting up certificates
	I0815 16:24:43.055515    3649 provision.go:84] configureAuth start
	I0815 16:24:43.055522    3649 main.go:141] libmachine: (ha-138000) Calling .GetMachineName
	I0815 16:24:43.055669    3649 main.go:141] libmachine: (ha-138000) Calling .GetIP
	I0815 16:24:43.055769    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:43.055868    3649 provision.go:143] copyHostCerts
	I0815 16:24:43.055901    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:24:43.055963    3649 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem, removing ...
	I0815 16:24:43.055971    3649 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:24:43.056106    3649 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem (1082 bytes)
	I0815 16:24:43.056322    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:24:43.056353    3649 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem, removing ...
	I0815 16:24:43.056358    3649 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:24:43.056432    3649 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem (1123 bytes)
	I0815 16:24:43.056583    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:24:43.056611    3649 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem, removing ...
	I0815 16:24:43.056615    3649 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:24:43.056681    3649 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem (1679 bytes)
	I0815 16:24:43.056840    3649 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem org=jenkins.ha-138000 san=[127.0.0.1 192.169.0.5 ha-138000 localhost minikube]
	I0815 16:24:43.121501    3649 provision.go:177] copyRemoteCerts
	I0815 16:24:43.121552    3649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 16:24:43.121568    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:43.121697    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:43.121782    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:43.121880    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:43.121971    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:24:43.165154    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 16:24:43.165236    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 16:24:43.200018    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 16:24:43.200092    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0815 16:24:43.220757    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 16:24:43.220829    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 16:24:43.240667    3649 provision.go:87] duration metric: took 185.141163ms to configureAuth
	I0815 16:24:43.240680    3649 buildroot.go:189] setting minikube options for container-runtime
	I0815 16:24:43.240849    3649 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:24:43.240863    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:43.240998    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:43.241100    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:43.241183    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:43.241273    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:43.241367    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:43.241484    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:24:43.241652    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:24:43.241660    3649 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0815 16:24:43.302884    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0815 16:24:43.302897    3649 buildroot.go:70] root file system type: tmpfs
	I0815 16:24:43.302965    3649 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0815 16:24:43.302977    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:43.303108    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:43.303198    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:43.303278    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:43.303364    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:43.303495    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:24:43.303638    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:24:43.303683    3649 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0815 16:24:43.378222    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0815 16:24:43.378246    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:43.378382    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:43.378461    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:43.378563    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:43.378649    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:43.378787    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:24:43.378932    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:24:43.378946    3649 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0815 16:24:45.080555    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0815 16:24:45.080572    3649 machine.go:96] duration metric: took 13.246248166s to provisionDockerMachine
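	The diff/mv exchange above is an update-if-changed guard: the freshly rendered docker.service only replaces the live unit (forcing a daemon-reload and a Docker restart) when its content actually differs. A minimal Go sketch of the same idea, independent of minikube's actual code (writeIfChanged is a hypothetical helper):
	
	package main
	
	import (
		"bytes"
		"fmt"
		"os"
	)
	
	// writeIfChanged mirrors the "diff || { mv; restart }" pattern in the log:
	// rewrite the file only when the desired content differs from what is on
	// disk, and report whether a restart is needed.
	func writeIfChanged(path string, want []byte) (changed bool, err error) {
		have, err := os.ReadFile(path)
		if err == nil && bytes.Equal(have, want) {
			return false, nil // unchanged: skip the daemon-reload/restart
		}
		if err := os.WriteFile(path, want, 0o644); err != nil {
			return false, err
		}
		return true, nil
	}
	
	func main() {
		changed, err := writeIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
		if err != nil {
			panic(err)
		}
		fmt.Println("restart needed:", changed)
	}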
	I0815 16:24:45.080585    3649 start.go:293] postStartSetup for "ha-138000" (driver="hyperkit")
	I0815 16:24:45.080595    3649 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 16:24:45.080616    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:45.080791    3649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 16:24:45.080805    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:45.080908    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:45.080996    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:45.081081    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:45.081171    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:24:45.119742    3649 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 16:24:45.122978    3649 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 16:24:45.122994    3649 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/addons for local assets ...
	I0815 16:24:45.123095    3649 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/files for local assets ...
	I0815 16:24:45.123274    3649 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> 14982.pem in /etc/ssl/certs
	I0815 16:24:45.123280    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /etc/ssl/certs/14982.pem
	I0815 16:24:45.123473    3649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 16:24:45.130896    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:24:45.150554    3649 start.go:296] duration metric: took 69.960327ms for postStartSetup
	I0815 16:24:45.150578    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:45.150756    3649 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0815 16:24:45.150769    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:45.150849    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:45.150943    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:45.151041    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:45.151122    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:24:45.187860    3649 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0815 16:24:45.187918    3649 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0815 16:24:45.240522    3649 fix.go:56] duration metric: took 13.602028125s for fixHost
	I0815 16:24:45.240543    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:45.240694    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:45.240782    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:45.240866    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:45.240953    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:45.241079    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:24:45.241222    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:24:45.241230    3649 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 16:24:45.308498    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723764285.546205797
	
	I0815 16:24:45.308509    3649 fix.go:216] guest clock: 1723764285.546205797
	I0815 16:24:45.308515    3649 fix.go:229] Guest: 2024-08-15 16:24:45.546205797 -0700 PDT Remote: 2024-08-15 16:24:45.240533 -0700 PDT m=+14.043250910 (delta=305.672797ms)
	I0815 16:24:45.308536    3649 fix.go:200] guest clock delta is within tolerance: 305.672797ms
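	The two timestamps above come from running date +%s.%N inside the guest and comparing the result against the host clock; the start proceeds because the ~305ms delta is small. A runnable Go sketch of that comparison (the 2s tolerance is an assumption for illustration, not minikube's actual threshold):
	
	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)
	
	// skew parses the guest's `date +%s.%N` output and returns the absolute
	// difference from the local clock.
	func skew(guestOut string) time.Duration {
		parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
		sec, _ := strconv.ParseInt(parts[0], 10, 64)
		nsec, _ := strconv.ParseInt(parts[1], 10, 64)
		d := time.Since(time.Unix(sec, nsec))
		if d < 0 {
			d = -d
		}
		return d
	}
	
	func main() {
		// Sample value taken from the log; run live, the delta is of course large.
		delta := skew("1723764285.546205797")
		fmt.Printf("guest clock delta: %v (within 2s tolerance: %v)\n", delta, delta <= 2*time.Second)
	}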
	I0815 16:24:45.308540    3649 start.go:83] releasing machines lock for "ha-138000", held for 13.670085598s
	I0815 16:24:45.308562    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:45.308691    3649 main.go:141] libmachine: (ha-138000) Calling .GetIP
	I0815 16:24:45.308815    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:45.309125    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:45.309228    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:45.309333    3649 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 16:24:45.309348    3649 ssh_runner.go:195] Run: cat /version.json
	I0815 16:24:45.309359    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:45.309374    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:45.309454    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:45.309481    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:45.309570    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:45.309586    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:45.309666    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:45.309673    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:45.309753    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:24:45.309764    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:24:45.353596    3649 ssh_runner.go:195] Run: systemctl --version
	I0815 16:24:45.358729    3649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 16:24:45.412525    3649 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 16:24:45.412627    3649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 16:24:45.428066    3649 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 16:24:45.428077    3649 start.go:495] detecting cgroup driver to use...
	I0815 16:24:45.428183    3649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:24:45.444602    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0815 16:24:45.453384    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0815 16:24:45.462134    3649 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0815 16:24:45.462180    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0815 16:24:45.470781    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:24:45.479385    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0815 16:24:45.487960    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:24:45.496691    3649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 16:24:45.505669    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0815 16:24:45.514277    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0815 16:24:45.522851    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0815 16:24:45.531584    3649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 16:24:45.539529    3649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 16:24:45.547375    3649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:24:45.642699    3649 ssh_runner.go:195] Run: sudo systemctl restart containerd
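	The series of sed edits above rewrites /etc/containerd/config.toml in place, e.g. forcing SystemdCgroup = false so containerd matches the chosen cgroupfs driver. The same substitution expressed as a standalone Go sketch (illustrative only, operating on an in-memory string rather than the real config):
	
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	func main() {
		conf := `[plugins."io.containerd.grpc.v1.cri"]
	  SystemdCgroup = true`
		// Same effect as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
	}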
	I0815 16:24:45.657803    3649 start.go:495] detecting cgroup driver to use...
	I0815 16:24:45.657881    3649 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0815 16:24:45.669244    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:24:45.680074    3649 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 16:24:45.692718    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:24:45.703066    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:24:45.713234    3649 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0815 16:24:45.735236    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:24:45.745677    3649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:24:45.760852    3649 ssh_runner.go:195] Run: which cri-dockerd
	I0815 16:24:45.763929    3649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0815 16:24:45.771021    3649 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0815 16:24:45.784172    3649 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0815 16:24:45.887215    3649 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0815 16:24:45.995634    3649 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0815 16:24:45.995716    3649 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0815 16:24:46.010389    3649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:24:46.126522    3649 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0815 16:24:48.464685    3649 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.338152009s)
	I0815 16:24:48.464761    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0815 16:24:48.475831    3649 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0815 16:24:48.490512    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:24:48.501692    3649 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0815 16:24:48.596754    3649 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0815 16:24:48.705379    3649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:24:48.807279    3649 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0815 16:24:48.821232    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:24:48.832145    3649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:24:48.931537    3649 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0815 16:24:48.994946    3649 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0815 16:24:48.995028    3649 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0815 16:24:48.999199    3649 start.go:563] Will wait 60s for crictl version
	I0815 16:24:48.999246    3649 ssh_runner.go:195] Run: which crictl
	I0815 16:24:49.002242    3649 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 16:24:49.031023    3649 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0815 16:24:49.031095    3649 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:24:49.049391    3649 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
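	docker version accepts a Go template via --format, which is why the runs above can pull out the bare server version string. A hedged sketch of the same probe (assumes a docker CLI and a reachable daemon on the machine running it):
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Equivalent of the logged command: docker version --format {{.Server.Version}}
		out, err := exec.Command("docker", "version", "--format", "{{.Server.Version}}").Output()
		if err != nil {
			fmt.Println("docker not reachable:", err)
			return
		}
		fmt.Println("server version:", strings.TrimSpace(string(out)))
	}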
	I0815 16:24:49.110204    3649 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0815 16:24:49.110253    3649 main.go:141] libmachine: (ha-138000) Calling .GetIP
	I0815 16:24:49.110630    3649 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0815 16:24:49.114885    3649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
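	The /etc/hosts one-liner above is idempotent: it filters out any existing host.minikube.internal entry before appending the current mapping, so repeated starts do not accumulate duplicates. A pure-Go sketch of that filter, operating on a string rather than the real /etc/hosts:
	
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// upsertHost drops any line already ending in "<tab><name>" and appends
	// the fresh "ip<tab>name" mapping, mirroring the shell pipeline above.
	func upsertHost(hosts, ip, name string) string {
		var kept []string
		for _, l := range strings.Split(hosts, "\n") {
			if !strings.HasSuffix(l, "\t"+name) {
				kept = append(kept, l)
			}
		}
		return strings.Join(kept, "\n") + "\n" + ip + "\t" + name + "\n"
	}
	
	func main() {
		fmt.Print(upsertHost("127.0.0.1\tlocalhost", "192.169.0.1", "host.minikube.internal"))
	}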
	I0815 16:24:49.125317    3649 kubeadm.go:883] updating cluster {Name:ha-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 16:24:49.125409    3649 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:24:49.125461    3649 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0815 16:24:49.138389    3649 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0815 16:24:49.138400    3649 docker.go:615] Images already preloaded, skipping extraction
	I0815 16:24:49.138469    3649 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0815 16:24:49.152217    3649 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0815 16:24:49.152236    3649 cache_images.go:84] Images are preloaded, skipping loading
	I0815 16:24:49.152245    3649 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0815 16:24:49.152316    3649 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-138000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
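	The kubelet unit above is rendered per node, with --hostname-override and --node-ip filled in from the cluster config. A small text/template sketch of that kind of rendering (the data keys below are invented for this illustration and are not minikube's actual template fields):
	
	package main
	
	import (
		"os"
		"text/template"
	)
	
	// A pared-down kubelet override in the shape logged by kubeadm.go:946 above.
	const unit = `[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}
	`
	
	func main() {
		t := template.Must(template.New("kubelet").Parse(unit))
		if err := t.Execute(os.Stdout, map[string]string{
			"KubernetesVersion": "v1.31.0",
			"NodeName":          "ha-138000",
			"NodeIP":            "192.169.0.5",
		}); err != nil {
			panic(err)
		}
	}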
	I0815 16:24:49.152387    3649 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0815 16:24:49.188207    3649 cni.go:84] Creating CNI manager for ""
	I0815 16:24:49.188219    3649 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0815 16:24:49.188233    3649 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 16:24:49.188247    3649 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-138000 NodeName:ha-138000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 16:24:49.188328    3649 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-138000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
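	One invariant worth noting in the generated config: the pod subnet (10.244.0.0/16) and the service subnet (10.96.0.0/12) must not overlap. A quick Go check of that invariant (not part of minikube, just a hand-verification aid; for aligned CIDR blocks, overlap implies one contains the other's base address):
	
	package main
	
	import (
		"fmt"
		"net"
	)
	
	// overlaps reports whether two CIDR blocks intersect. Because CIDR blocks
	// are aligned, two blocks overlap only if one contains the other's base IP.
	func overlaps(a, b *net.IPNet) bool {
		return a.Contains(b.IP) || b.Contains(a.IP)
	}
	
	func main() {
		_, pods, _ := net.ParseCIDR("10.244.0.0/16")
		_, svcs, _ := net.ParseCIDR("10.96.0.0/12")
		fmt.Println("pod/service CIDR overlap:", overlaps(pods, svcs)) // false
	}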
	
	I0815 16:24:49.188340    3649 kube-vip.go:115] generating kube-vip config ...
	I0815 16:24:49.188395    3649 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 16:24:49.201717    3649 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 16:24:49.201810    3649 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0815 16:24:49.201860    3649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 16:24:49.210773    3649 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 16:24:49.210821    3649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0815 16:24:49.218705    3649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0815 16:24:49.232092    3649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 16:24:49.245488    3649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0815 16:24:49.259182    3649 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0815 16:24:49.272667    3649 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0815 16:24:49.275463    3649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 16:24:49.285341    3649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:24:49.379165    3649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:24:49.393690    3649 certs.go:68] Setting up /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000 for IP: 192.169.0.5
	I0815 16:24:49.393701    3649 certs.go:194] generating shared ca certs ...
	I0815 16:24:49.393711    3649 certs.go:226] acquiring lock for ca certs: {Name:mka1c88019f6064d2983ac988db71a67aeb65696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:24:49.393886    3649 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key
	I0815 16:24:49.393940    3649 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key
	I0815 16:24:49.393952    3649 certs.go:256] generating profile certs ...
	I0815 16:24:49.394054    3649 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.key
	I0815 16:24:49.394074    3649 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key.7af4c91a
	I0815 16:24:49.394091    3649 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt.7af4c91a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0815 16:24:49.771714    3649 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt.7af4c91a ...
	I0815 16:24:49.771738    3649 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt.7af4c91a: {Name:mkfdf96fafb98f174dadc5b6379869463c2a6ca3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:24:49.772085    3649 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key.7af4c91a ...
	I0815 16:24:49.772094    3649 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key.7af4c91a: {Name:mk0c2b233ae670508e502baf145f82fc5c8af979 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:24:49.772311    3649 certs.go:381] copying /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt.7af4c91a -> /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt
	I0815 16:24:49.772506    3649 certs.go:385] copying /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key.7af4c91a -> /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key
	I0815 16:24:49.772728    3649 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key
	I0815 16:24:49.772737    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 16:24:49.772760    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 16:24:49.772779    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 16:24:49.772798    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 16:24:49.772818    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 16:24:49.772836    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 16:24:49.772855    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 16:24:49.772873    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 16:24:49.772972    3649 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem (1338 bytes)
	W0815 16:24:49.773012    3649 certs.go:480] ignoring /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498_empty.pem, impossibly tiny 0 bytes
	I0815 16:24:49.773021    3649 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 16:24:49.773066    3649 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem (1082 bytes)
	I0815 16:24:49.773106    3649 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem (1123 bytes)
	I0815 16:24:49.773135    3649 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem (1679 bytes)
	I0815 16:24:49.773201    3649 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:24:49.773235    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /usr/share/ca-certificates/14982.pem
	I0815 16:24:49.773257    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:24:49.773276    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem -> /usr/share/ca-certificates/1498.pem
	I0815 16:24:49.773761    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 16:24:49.799905    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 16:24:49.819857    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 16:24:49.839446    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 16:24:49.859479    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 16:24:49.878979    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 16:24:49.898857    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 16:24:49.918488    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 16:24:49.938289    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /usr/share/ca-certificates/14982.pem (1708 bytes)
	I0815 16:24:49.958067    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 16:24:49.977508    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem --> /usr/share/ca-certificates/1498.pem (1338 bytes)
	I0815 16:24:49.997111    3649 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 16:24:50.010415    3649 ssh_runner.go:195] Run: openssl version
	I0815 16:24:50.014564    3649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14982.pem && ln -fs /usr/share/ca-certificates/14982.pem /etc/ssl/certs/14982.pem"
	I0815 16:24:50.022762    3649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14982.pem
	I0815 16:24:50.025974    3649 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:14 /usr/share/ca-certificates/14982.pem
	I0815 16:24:50.026012    3649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14982.pem
	I0815 16:24:50.030247    3649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14982.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 16:24:50.038688    3649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 16:24:50.046935    3649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:24:50.050205    3649 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:05 /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:24:50.050240    3649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:24:50.054437    3649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 16:24:50.062668    3649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1498.pem && ln -fs /usr/share/ca-certificates/1498.pem /etc/ssl/certs/1498.pem"
	I0815 16:24:50.070835    3649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1498.pem
	I0815 16:24:50.074144    3649 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:14 /usr/share/ca-certificates/1498.pem
	I0815 16:24:50.074179    3649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1498.pem
	I0815 16:24:50.078407    3649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1498.pem /etc/ssl/certs/51391683.0"
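	Each ln -fs above names the symlink after OpenSSL's subject hash, which is how the TLS stack locates CA certificates in /etc/ssl/certs (a "<hash>.0" link). A sketch of the same step in Go (assumes an openssl binary on PATH; the paths are taken from the log and would normally require root):
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// linkBySubjectHash asks OpenSSL for the cert's subject-name hash and
	// creates the "<hash>.0" symlink that the lookup machinery expects.
	func linkBySubjectHash(certPath, dir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		return os.Symlink(certPath, filepath.Join(dir, hash+".0"))
	}
	
	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Println(err)
		}
	}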
	I0815 16:24:50.087458    3649 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 16:24:50.090800    3649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 16:24:50.095300    3649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 16:24:50.099454    3649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 16:24:50.104181    3649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 16:24:50.108451    3649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 16:24:50.112679    3649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
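	openssl x509 -checkend 86400 asks whether a certificate is still valid 24 hours from now; a non-zero exit would force regeneration. The equivalent test in pure Go is a NotAfter comparison; this sketch self-signs a throwaway certificate so the program runs as-is:
	
	package main
	
	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"fmt"
		"math/big"
		"time"
	)
	
	func main() {
		// Generate a throwaway self-signed cert valid for 48h.
		key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(48 * time.Hour),
		}
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		cert, _ := x509.ParseCertificate(der)
		// The -checkend 86400 test: does the cert outlive the next 24h?
		ok := time.Now().Add(24 * time.Hour).Before(cert.NotAfter)
		fmt.Println("valid for the next 24h:", ok)
	}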
	I0815 16:24:50.116963    3649 kubeadm.go:392] StartCluster: {Name:ha-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:24:50.117082    3649 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0815 16:24:50.130554    3649 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 16:24:50.137992    3649 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 16:24:50.138004    3649 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 16:24:50.138048    3649 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 16:24:50.145558    3649 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 16:24:50.145859    3649 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-138000" does not appear in /Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:24:50.145940    3649 kubeconfig.go:62] /Users/jenkins/minikube-integration/19452-977/kubeconfig needs updating (will repair): [kubeconfig missing "ha-138000" cluster setting kubeconfig missing "ha-138000" context setting]
	I0815 16:24:50.146137    3649 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-977/kubeconfig: {Name:mk0f04b0199f84dc20eb294d5b790451b41e43fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:24:50.146558    3649 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:24:50.146752    3649 kapi.go:59] client config for ha-138000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.key", CAFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x5983f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0815 16:24:50.147060    3649 cert_rotation.go:140] Starting client certificate rotation controller
	I0815 16:24:50.147235    3649 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 16:24:50.154308    3649 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0815 16:24:50.154320    3649 kubeadm.go:597] duration metric: took 16.312125ms to restartPrimaryControlPlane
	I0815 16:24:50.154325    3649 kubeadm.go:394] duration metric: took 37.367941ms to StartCluster
	I0815 16:24:50.154333    3649 settings.go:142] acquiring lock: {Name:mk694dad19d37394fa6b13c51a7dc54b62e97c12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:24:50.154408    3649 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:24:50.154767    3649 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-977/kubeconfig: {Name:mk0f04b0199f84dc20eb294d5b790451b41e43fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:24:50.154992    3649 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 16:24:50.155005    3649 start.go:241] waiting for startup goroutines ...
	I0815 16:24:50.155016    3649 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 16:24:50.155148    3649 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:24:50.196433    3649 out.go:177] * Enabled addons: 
	I0815 16:24:50.217474    3649 addons.go:510] duration metric: took 62.454726ms for enable addons: enabled=[]
	I0815 16:24:50.217512    3649 start.go:246] waiting for cluster config update ...
	I0815 16:24:50.217524    3649 start.go:255] writing updated cluster config ...
	I0815 16:24:50.239613    3649 out.go:201] 
	I0815 16:24:50.260810    3649 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:24:50.260937    3649 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:24:50.282712    3649 out.go:177] * Starting "ha-138000-m02" control-plane node in "ha-138000" cluster
	I0815 16:24:50.324521    3649 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:24:50.324584    3649 cache.go:56] Caching tarball of preloaded images
	I0815 16:24:50.324754    3649 preload.go:172] Found /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0815 16:24:50.324772    3649 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:24:50.324901    3649 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:24:50.325802    3649 start.go:360] acquireMachinesLock for ha-138000-m02: {Name:mkac8034c8a60617271f2754a03dcaf74fc4fef7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:24:50.325911    3649 start.go:364] duration metric: took 84.439µs to acquireMachinesLock for "ha-138000-m02"
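	The acquireMachinesLock lines above show minikube's habit of logging a duration metric around every potentially slow step, lock acquisition included. A generic Go sketch of that timing pattern:
	
	package main
	
	import (
		"fmt"
		"sync"
		"time"
	)
	
	func main() {
		var mu sync.Mutex
		start := time.Now()
		mu.Lock() // in minikube this wait can be long when another start holds the lock
		defer mu.Unlock()
		fmt.Printf("duration metric: took %s to acquire lock\n", time.Since(start))
	}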
	I0815 16:24:50.325938    3649 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:24:50.325946    3649 fix.go:54] fixHost starting: m02
	I0815 16:24:50.326424    3649 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:24:50.326451    3649 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:24:50.335682    3649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52048
	I0815 16:24:50.336051    3649 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:24:50.336443    3649 main.go:141] libmachine: Using API Version  1
	I0815 16:24:50.336459    3649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:24:50.336675    3649 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:24:50.336791    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:24:50.336888    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetState
	I0815 16:24:50.336961    3649 main.go:141] libmachine: (ha-138000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:24:50.337044    3649 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid from json: 3600
	I0815 16:24:50.337930    3649 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid 3600 missing from process table
	I0815 16:24:50.337962    3649 fix.go:112] recreateIfNeeded on ha-138000-m02: state=Stopped err=<nil>
	I0815 16:24:50.337972    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	W0815 16:24:50.338053    3649 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:24:50.379676    3649 out.go:177] * Restarting existing hyperkit VM for "ha-138000-m02" ...
	I0815 16:24:50.400415    3649 main.go:141] libmachine: (ha-138000-m02) Calling .Start
	I0815 16:24:50.400691    3649 main.go:141] libmachine: (ha-138000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:24:50.400747    3649 main.go:141] libmachine: (ha-138000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/hyperkit.pid
	I0815 16:24:50.402488    3649 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid 3600 missing from process table
	I0815 16:24:50.402502    3649 main.go:141] libmachine: (ha-138000-m02) DBG | pid 3600 is in state "Stopped"
	I0815 16:24:50.402518    3649 main.go:141] libmachine: (ha-138000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/hyperkit.pid...
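	The pid-file dance above handles an unclean shutdown: the hyperkit pid file survived, but pid 3600 is gone from the process table, so the stale file is removed before a fresh VM start. A Unix-only Go sketch of that liveness probe (the pidfile path here is hypothetical):
	
	package main
	
	import (
		"fmt"
		"os"
		"strconv"
		"strings"
		"syscall"
	)
	
	// pidAlive sends signal 0, which performs the permission/existence check
	// without actually delivering a signal.
	func pidAlive(pid int) bool {
		proc, err := os.FindProcess(pid) // always succeeds on Unix
		if err != nil {
			return false
		}
		return proc.Signal(syscall.Signal(0)) == nil
	}
	
	func main() {
		const pidfile = "/tmp/hyperkit.pid"
		data, err := os.ReadFile(pidfile)
		if err != nil {
			return // no pidfile, nothing to clean up
		}
		pid, _ := strconv.Atoi(strings.TrimSpace(string(data)))
		if !pidAlive(pid) {
			fmt.Println("removing stale pid file for pid", pid)
			os.Remove(pidfile)
		}
	}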
	I0815 16:24:50.402857    3649 main.go:141] libmachine: (ha-138000-m02) DBG | Using UUID 4cff9b5a-9fe3-4215-9139-05f05b79bce3
	I0815 16:24:50.432166    3649 main.go:141] libmachine: (ha-138000-m02) DBG | Generated MAC 9a:c2:e9:d7:1c:58
	I0815 16:24:50.432194    3649 main.go:141] libmachine: (ha-138000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000
	I0815 16:24:50.432283    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"4cff9b5a-9fe3-4215-9139-05f05b79bce3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b06c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:24:50.432316    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"4cff9b5a-9fe3-4215-9139-05f05b79bce3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b06c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:24:50.432360    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "4cff9b5a-9fe3-4215-9139-05f05b79bce3", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/ha-138000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"}
	I0815 16:24:50.432400    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 4cff9b5a-9fe3-4215-9139-05f05b79bce3 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/ha-138000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"
	I0815 16:24:50.432410    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0815 16:24:50.433800    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 DEBUG: hyperkit: Pid is 3670
	I0815 16:24:50.434270    3649 main.go:141] libmachine: (ha-138000-m02) DBG | Attempt 0
	I0815 16:24:50.434284    3649 main.go:141] libmachine: (ha-138000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:24:50.434361    3649 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid from json: 3670
	I0815 16:24:50.436313    3649 main.go:141] libmachine: (ha-138000-m02) DBG | Searching for 9a:c2:e9:d7:1c:58 in /var/db/dhcpd_leases ...
	I0815 16:24:50.436365    3649 main.go:141] libmachine: (ha-138000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0815 16:24:50.436381    3649 main.go:141] libmachine: (ha-138000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfdfb9}
	I0815 16:24:50.436395    3649 main.go:141] libmachine: (ha-138000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be8e15}
	I0815 16:24:50.436408    3649 main.go:141] libmachine: (ha-138000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfdf74}
	I0815 16:24:50.436429    3649 main.go:141] libmachine: (ha-138000-m02) DBG | Found match: 9a:c2:e9:d7:1c:58
	I0815 16:24:50.436463    3649 main.go:141] libmachine: (ha-138000-m02) DBG | IP: 192.169.0.6
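	The lease match above is how the hyperkit driver maps the VM's generated MAC to an IP: it rescans the host's /var/db/dhcpd_leases until an entry carrying that hw_address shows up. A minimal sketch of the same lookup from a macOS shell, using the MAC and lease-file path taken from this log (the context-line counts are a guess and may need adjusting to the lease entry layout):
	
	  # print the dhcpd lease entry surrounding the VM's generated MAC address
	  grep -B2 -A2 '9a:c2:e9:d7:1c:58' /var/db/dhcpd_leases
	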
	I0815 16:24:50.436476    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetConfigRaw
	I0815 16:24:50.437131    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetIP
	I0815 16:24:50.437308    3649 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:24:50.437758    3649 machine.go:93] provisionDockerMachine start ...
	I0815 16:24:50.437768    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:24:50.437887    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:24:50.437997    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:24:50.438094    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:24:50.438199    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:24:50.438287    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:24:50.438398    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:24:50.438546    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:24:50.438554    3649 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 16:24:50.441514    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0815 16:24:50.450166    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0815 16:24:50.451006    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:24:50.451024    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:24:50.451053    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:24:50.451081    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:24:50.836828    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0815 16:24:50.836848    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0815 16:24:50.951307    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:24:50.951325    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:24:50.951354    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:24:50.951377    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:24:50.952254    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0815 16:24:50.952268    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0815 16:24:56.551926    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:56 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0815 16:24:56.551945    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:56 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0815 16:24:56.551957    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:56 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0815 16:24:56.576187    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:56 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0815 16:25:25.506687    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 16:25:25.506701    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetMachineName
	I0815 16:25:25.506833    3649 buildroot.go:166] provisioning hostname "ha-138000-m02"
	I0815 16:25:25.506845    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetMachineName
	I0815 16:25:25.506942    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:25.507027    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:25.507110    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:25.507196    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:25.507274    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:25.507413    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:25:25.507576    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:25:25.507586    3649 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-138000-m02 && echo "ha-138000-m02" | sudo tee /etc/hostname
	I0815 16:25:25.578727    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-138000-m02
	
	I0815 16:25:25.578742    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:25.578877    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:25.578967    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:25.579045    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:25.579129    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:25.579269    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:25:25.579419    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:25:25.579432    3649 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-138000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-138000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-138000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 16:25:25.645270    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 16:25:25.645285    3649 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19452-977/.minikube CaCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19452-977/.minikube}
	I0815 16:25:25.645301    3649 buildroot.go:174] setting up certificates
	I0815 16:25:25.645307    3649 provision.go:84] configureAuth start
	I0815 16:25:25.645342    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetMachineName
	I0815 16:25:25.645472    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetIP
	I0815 16:25:25.645569    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:25.645659    3649 provision.go:143] copyHostCerts
	I0815 16:25:25.645686    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:25:25.645746    3649 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem, removing ...
	I0815 16:25:25.645752    3649 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:25:25.645910    3649 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem (1082 bytes)
	I0815 16:25:25.646118    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:25:25.646164    3649 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem, removing ...
	I0815 16:25:25.646169    3649 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:25:25.646253    3649 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem (1123 bytes)
	I0815 16:25:25.646420    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:25:25.646496    3649 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem, removing ...
	I0815 16:25:25.646504    3649 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:25:25.646598    3649 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem (1679 bytes)
	I0815 16:25:25.646765    3649 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem org=jenkins.ha-138000-m02 san=[127.0.0.1 192.169.0.6 ha-138000-m02 localhost minikube]
	I0815 16:25:25.825658    3649 provision.go:177] copyRemoteCerts
	I0815 16:25:25.825707    3649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 16:25:25.825722    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:25.825863    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:25.825953    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:25.826053    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:25.826140    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	I0815 16:25:25.862344    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 16:25:25.862417    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 16:25:25.882572    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 16:25:25.882639    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 16:25:25.902404    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 16:25:25.902470    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 16:25:25.922317    3649 provision.go:87] duration metric: took 277.0023ms to configureAuth
	I0815 16:25:25.922332    3649 buildroot.go:189] setting minikube options for container-runtime
	I0815 16:25:25.922512    3649 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:25:25.922526    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:25:25.922660    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:25.922753    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:25.922847    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:25.922931    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:25.923029    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:25.923140    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:25:25.923269    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:25:25.923277    3649 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0815 16:25:25.984805    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0815 16:25:25.984816    3649 buildroot.go:70] root file system type: tmpfs
	I0815 16:25:25.984938    3649 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0815 16:25:25.984949    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:25.985083    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:25.985169    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:25.985249    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:25.985329    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:25.985450    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:25:25.985607    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:25:25.985653    3649 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0815 16:25:26.056607    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0815 16:25:26.056625    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:26.056761    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:26.056863    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:26.056957    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:26.057043    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:26.057179    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:25:26.057326    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:25:26.057338    3649 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0815 16:25:27.732286    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0815 16:25:27.732301    3649 machine.go:96] duration metric: took 37.294661422s to provisionDockerMachine
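	The provisioning step above follows minikube's idempotent unit-update pattern: write docker.service.new, diff it against the installed unit, and only on a difference swap it in and daemon-reload/enable/restart. If the node were still reachable, the effect could be checked from inside the guest with standard systemd commands (a sketch, not output from this run):
	
	  # show the unit file systemd actually loaded for docker
	  systemctl cat docker
	  # confirm the single effective ExecStart left after the empty "ExecStart=" reset
	  systemctl show docker -p ExecStart --no-pager
	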
	I0815 16:25:27.732309    3649 start.go:293] postStartSetup for "ha-138000-m02" (driver="hyperkit")
	I0815 16:25:27.732317    3649 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 16:25:27.732327    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:25:27.732516    3649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 16:25:27.732528    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:27.732625    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:27.732731    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:27.732809    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:27.732896    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	I0815 16:25:27.769243    3649 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 16:25:27.772355    3649 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 16:25:27.772366    3649 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/addons for local assets ...
	I0815 16:25:27.772467    3649 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/files for local assets ...
	I0815 16:25:27.772656    3649 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> 14982.pem in /etc/ssl/certs
	I0815 16:25:27.772668    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /etc/ssl/certs/14982.pem
	I0815 16:25:27.772873    3649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 16:25:27.780868    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:25:27.799647    3649 start.go:296] duration metric: took 67.329668ms for postStartSetup
	I0815 16:25:27.799668    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:25:27.799829    3649 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0815 16:25:27.799842    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:27.799928    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:27.800000    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:27.800074    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:27.800149    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	I0815 16:25:27.837218    3649 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0815 16:25:27.837277    3649 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0815 16:25:27.871532    3649 fix.go:56] duration metric: took 37.545710837s for fixHost
	I0815 16:25:27.871559    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:27.871714    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:27.871806    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:27.871884    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:27.871974    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:27.872101    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:25:27.872250    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:25:27.872257    3649 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 16:25:27.932451    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723764328.172025914
	
	I0815 16:25:27.932464    3649 fix.go:216] guest clock: 1723764328.172025914
	I0815 16:25:27.932470    3649 fix.go:229] Guest: 2024-08-15 16:25:28.172025914 -0700 PDT Remote: 2024-08-15 16:25:27.871549 -0700 PDT m=+56.674410917 (delta=300.476914ms)
	I0815 16:25:27.932480    3649 fix.go:200] guest clock delta is within tolerance: 300.476914ms
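	The reported delta is simply the guest's `date +%s.%N` reading minus the host timestamp at the moment of the read: 1723764328.172025914 - 1723764327.871549 ≈ 0.3005 s. A minimal sketch of the same comparison, assuming SSH access with the key path, user, and IP shown earlier in this log (the host side uses python3 because BSD date on macOS does not support %N):
	
	  # guest (Linux) wall clock at nanosecond resolution
	  guest=$(ssh -i /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa docker@192.169.0.6 'date +%s.%N')
	  # host (macOS) wall clock via python3
	  host=$(python3 -c 'import time; print(f"{time.time():.9f}")')
	  echo "guest-host delta: $(echo "$guest - $host" | bc) s"
	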
	I0815 16:25:27.932484    3649 start.go:83] releasing machines lock for "ha-138000-m02", held for 37.606689063s
	I0815 16:25:27.932502    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:25:27.932640    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetIP
	I0815 16:25:27.955698    3649 out.go:177] * Found network options:
	I0815 16:25:27.976977    3649 out.go:177]   - NO_PROXY=192.169.0.5
	W0815 16:25:27.997880    3649 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 16:25:27.997916    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:25:27.998743    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:25:27.998959    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:25:27.999062    3649 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 16:25:27.999103    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	W0815 16:25:27.999149    3649 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 16:25:27.999255    3649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 16:25:27.999276    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:27.999310    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:27.999538    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:27.999567    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:27.999751    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:27.999778    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:27.999890    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:27.999915    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	I0815 16:25:28.000017    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	W0815 16:25:28.032774    3649 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 16:25:28.032832    3649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 16:25:28.085200    3649 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 16:25:28.085222    3649 start.go:495] detecting cgroup driver to use...
	I0815 16:25:28.085337    3649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:25:28.101256    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0815 16:25:28.110461    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0815 16:25:28.119610    3649 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0815 16:25:28.119671    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0815 16:25:28.128841    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:25:28.137598    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0815 16:25:28.146542    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:25:28.155343    3649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 16:25:28.164400    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0815 16:25:28.173324    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0815 16:25:28.182447    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
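	The sed chain above rewrites /etc/containerd/config.toml in place (cgroupfs cgroup driver, pause image, CNI conf_dir, unprivileged ports). A quick way to spot-check the result inside the guest, sketched here rather than taken from the run:
	
	  # spot-check the settings the sed chain is expected to have produced
	  grep -E 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	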
	I0815 16:25:28.191439    3649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 16:25:28.199534    3649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 16:25:28.207385    3649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:25:28.307256    3649 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0815 16:25:28.326701    3649 start.go:495] detecting cgroup driver to use...
	I0815 16:25:28.326772    3649 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0815 16:25:28.345963    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:25:28.361865    3649 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 16:25:28.380032    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:25:28.392583    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:25:28.403338    3649 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0815 16:25:28.425534    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:25:28.435952    3649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:25:28.450826    3649 ssh_runner.go:195] Run: which cri-dockerd
	I0815 16:25:28.453880    3649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0815 16:25:28.461213    3649 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0815 16:25:28.474603    3649 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0815 16:25:28.569552    3649 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0815 16:25:28.669486    3649 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0815 16:25:28.669508    3649 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0815 16:25:28.684315    3649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:25:28.789048    3649 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0815 16:26:29.810459    3649 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.021600349s)
	I0815 16:26:29.810528    3649 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0815 16:26:29.846420    3649 out.go:201] 
	W0815 16:26:29.868048    3649 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 15 23:25:25 ha-138000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 15 23:25:25 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:25.509065819Z" level=info msg="Starting up"
	Aug 15 23:25:25 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:25.509592997Z" level=info msg="containerd not running, starting managed containerd"
	Aug 15 23:25:25 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:25.510095236Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=519
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.527964893Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.542679991Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.542751629Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.542813012Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.542847466Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.542971116Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.543022892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.543226251Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.543273769Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.543307918Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.543342764Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.543453732Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.543640009Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.545258649Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.545308637Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.545445977Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.545492906Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.545600399Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.545650460Z" level=info msg="metadata content store policy set" policy=shared
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.547717368Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.547830207Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.547884234Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.548013412Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.548060318Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.548127353Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.548391092Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.552607490Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.552725748Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.552840021Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.552885041Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.552918051Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.552984961Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553030860Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553064737Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553096185Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553126522Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553162873Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553202352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553233572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553266178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553297774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553327631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553357374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553386246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553418283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553450098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553484562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553517795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553547301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553576466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553607695Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553650178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553684928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553713941Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553789004Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553836418Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553870209Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553907631Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.554030910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.554116351Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.554162242Z" level=info msg="NRI interface is disabled by configuration."
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.554425646Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.554560798Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.554647146Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.554690019Z" level=info msg="containerd successfully booted in 0.027466s"
	Aug 15 23:25:26 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:26.539092962Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 15 23:25:26 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:26.579801466Z" level=info msg="Loading containers: start."
	Aug 15 23:25:26 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:26.753629817Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 15 23:25:27 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:27.897778336Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 15 23:25:27 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:27.941918967Z" level=info msg="Loading containers: done."
	Aug 15 23:25:27 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:27.949162882Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 15 23:25:27 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:27.949300191Z" level=info msg="Daemon has completed initialization"
	Aug 15 23:25:27 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:27.970294492Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 15 23:25:27 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:27.970499353Z" level=info msg="API listen on [::]:2376"
	Aug 15 23:25:27 ha-138000-m02 systemd[1]: Started Docker Application Container Engine.
	Aug 15 23:25:29 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:29.040016751Z" level=info msg="Processing signal 'terminated'"
	Aug 15 23:25:29 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:29.040919337Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 15 23:25:29 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:29.041235066Z" level=info msg="Daemon shutdown complete"
	Aug 15 23:25:29 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:29.041343453Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 15 23:25:29 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:29.041349896Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 15 23:25:29 ha-138000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Aug 15 23:25:30 ha-138000-m02 systemd[1]: docker.service: Deactivated successfully.
	Aug 15 23:25:30 ha-138000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Aug 15 23:25:30 ha-138000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 15 23:25:30 ha-138000-m02 dockerd[1088]: time="2024-08-15T23:25:30.078915638Z" level=info msg="Starting up"
	Aug 15 23:26:30 ha-138000-m02 dockerd[1088]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 15 23:26:30 ha-138000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 15 23:26:30 ha-138000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 15 23:26:30 ha-138000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
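	The failure signature sits in the last journalctl lines: the restarted dockerd (pid 1088) blocks for the full minute dialing /run/containerd/containerd.sock and exits on "context deadline exceeded", which is what surfaces as the RUNTIME_ENABLE error above. Plausible first diagnostics inside the guest, had it stayed reachable (a sketch, not output from this run):
	
	  # is the containerd unit that should own the socket alive?
	  systemctl status containerd --no-pager
	  journalctl -u containerd --no-pager | tail -n 50
	  # does the socket exist at all?
	  ls -l /run/containerd/containerd.sock
	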
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 15 23:25:25 ha-138000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 15 23:25:25 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:25.509065819Z" level=info msg="Starting up"
	Aug 15 23:25:25 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:25.509592997Z" level=info msg="containerd not running, starting managed containerd"
	Aug 15 23:25:25 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:25.510095236Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=519
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.527964893Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.542679991Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.542751629Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.542813012Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.542847466Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.542971116Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.543022892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.543226251Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.543273769Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.543307918Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.543342764Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.543453732Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.543640009Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.545258649Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.545308637Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.545445977Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.545492906Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.545600399Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.545650460Z" level=info msg="metadata content store policy set" policy=shared
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.547717368Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.547830207Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.547884234Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.548013412Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.548060318Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.548127353Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.548391092Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.552607490Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.552725748Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.552840021Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.552885041Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.552918051Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.552984961Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553030860Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553064737Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553096185Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553126522Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553162873Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553202352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553233572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553266178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553297774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553327631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553357374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553386246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553418283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553450098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553484562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553517795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553547301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553576466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553607695Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553650178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553684928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553713941Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553789004Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553836418Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553870209Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553907631Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.554030910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.554116351Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.554162242Z" level=info msg="NRI interface is disabled by configuration."
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.554425646Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.554560798Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.554647146Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.554690019Z" level=info msg="containerd successfully booted in 0.027466s"
	Aug 15 23:25:26 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:26.539092962Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 15 23:25:26 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:26.579801466Z" level=info msg="Loading containers: start."
	Aug 15 23:25:26 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:26.753629817Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 15 23:25:27 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:27.897778336Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 15 23:25:27 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:27.941918967Z" level=info msg="Loading containers: done."
	Aug 15 23:25:27 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:27.949162882Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 15 23:25:27 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:27.949300191Z" level=info msg="Daemon has completed initialization"
	Aug 15 23:25:27 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:27.970294492Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 15 23:25:27 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:27.970499353Z" level=info msg="API listen on [::]:2376"
	Aug 15 23:25:27 ha-138000-m02 systemd[1]: Started Docker Application Container Engine.
	Aug 15 23:25:29 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:29.040016751Z" level=info msg="Processing signal 'terminated'"
	Aug 15 23:25:29 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:29.040919337Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 15 23:25:29 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:29.041235066Z" level=info msg="Daemon shutdown complete"
	Aug 15 23:25:29 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:29.041343453Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 15 23:25:29 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:29.041349896Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 15 23:25:29 ha-138000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Aug 15 23:25:30 ha-138000-m02 systemd[1]: docker.service: Deactivated successfully.
	Aug 15 23:25:30 ha-138000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Aug 15 23:25:30 ha-138000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 15 23:25:30 ha-138000-m02 dockerd[1088]: time="2024-08-15T23:25:30.078915638Z" level=info msg="Starting up"
	Aug 15 23:26:30 ha-138000-m02 dockerd[1088]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 15 23:26:30 ha-138000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 15 23:26:30 ha-138000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 15 23:26:30 ha-138000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0815 16:26:29.868131    3649 out.go:270] * 
	W0815 16:26:29.869562    3649 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 16:26:29.930973    3649 out.go:201] 

                                                
                                                
** /stderr **
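The captured journal pins down the failure sequence: dockerd (pid 512) starts a managed containerd on /var/run/docker/containerd/containerd.sock and reports "Daemon has completed initialization" at 23:25:27, systemd restarts the unit at 23:25:30, and the restarted dockerd (pid 1088) then spends the full 60-second context deadline failing to dial /run/containerd/containerd.sock before exiting at 23:26:30. A minimal triage sketch for that state, assuming the m02 node is still reachable over SSH (the profile and node names are taken from this run; the flags are standard minikube and systemd CLI):

	# open a shell on the failing secondary control-plane node
	out/minikube-darwin-amd64 ssh -p ha-138000 -n m02
	# inside the VM: compare the socket dockerd timed out on with the one
	# the managed containerd actually serves
	ls -l /run/containerd/containerd.sock /var/run/docker/containerd/containerd.sock
	sudo systemctl status docker --no-pager
	sudo journalctl -u docker --no-pager | tail -n 50
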
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p ha-138000 -v=7 --alsologtostderr" : exit status 90
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-138000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-138000 -n ha-138000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-138000 -n ha-138000: exit status 2 (152.129238ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
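The probe above reads a single field of the status struct through a Go template; one invocation can report the neighboring components as well. A sketch against the same profile (the field names Kubelet, APIServer and Kubeconfig follow minikube's documented status output and should be treated as assumptions for this build):

	out/minikube-darwin-amd64 status -p ha-138000 -n ha-138000 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'

In this run the Host field reported Running even though the command exited 2, which the harness explicitly tolerates.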
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-138000 logs -n 25: (2.281654914s)
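The 25-line tail collected here is the quick post-mortem view; the advice box in the stderr capture above points at the fuller dump, which for this run's binary and profile would be:

	out/minikube-darwin-amd64 -p ha-138000 logs --file=logs.txt
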
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-138000 cp ha-138000-m03:/home/docker/cp-test.txt                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m02:/home/docker/cp-test_ha-138000-m03_ha-138000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n ha-138000-m02 sudo cat                                                                                      | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /home/docker/cp-test_ha-138000-m03_ha-138000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-138000 cp ha-138000-m03:/home/docker/cp-test.txt                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04:/home/docker/cp-test_ha-138000-m03_ha-138000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n ha-138000-m04 sudo cat                                                                                      | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /home/docker/cp-test_ha-138000-m03_ha-138000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-138000 cp testdata/cp-test.txt                                                                                            | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-138000 cp ha-138000-m04:/home/docker/cp-test.txt                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1119105544/001/cp-test_ha-138000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-138000 cp ha-138000-m04:/home/docker/cp-test.txt                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000:/home/docker/cp-test_ha-138000-m04_ha-138000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n ha-138000 sudo cat                                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /home/docker/cp-test_ha-138000-m04_ha-138000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-138000 cp ha-138000-m04:/home/docker/cp-test.txt                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m02:/home/docker/cp-test_ha-138000-m04_ha-138000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n ha-138000-m02 sudo cat                                                                                      | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /home/docker/cp-test_ha-138000-m04_ha-138000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-138000 cp ha-138000-m04:/home/docker/cp-test.txt                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m03:/home/docker/cp-test_ha-138000-m04_ha-138000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n ha-138000-m03 sudo cat                                                                                      | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /home/docker/cp-test_ha-138000-m04_ha-138000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-138000 node stop m02 -v=7                                                                                                 | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-138000 node start m02 -v=7                                                                                                | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:24 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-138000 -v=7                                                                                                       | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:24 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-138000 -v=7                                                                                                            | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:24 PDT | 15 Aug 24 16:24 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-138000 --wait=true -v=7                                                                                                | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:24 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-138000                                                                                                            | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:26 PDT |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 16:24:31
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 16:24:31.233096    3649 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:24:31.233281    3649 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:24:31.233287    3649 out.go:358] Setting ErrFile to fd 2...
	I0815 16:24:31.233290    3649 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:24:31.233463    3649 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-977/.minikube/bin
	I0815 16:24:31.234892    3649 out.go:352] Setting JSON to false
	I0815 16:24:31.259609    3649 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1442,"bootTime":1723762829,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0815 16:24:31.259835    3649 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 16:24:31.281220    3649 out.go:177] * [ha-138000] minikube v1.33.1 on Darwin 14.6.1
	I0815 16:24:31.323339    3649 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 16:24:31.323394    3649 notify.go:220] Checking for updates...
	I0815 16:24:31.366134    3649 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:24:31.387302    3649 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0815 16:24:31.408076    3649 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 16:24:31.429265    3649 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-977/.minikube
	I0815 16:24:31.450282    3649 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 16:24:31.472864    3649 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:24:31.473038    3649 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 16:24:31.473723    3649 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:24:31.473802    3649 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:24:31.483475    3649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52024
	I0815 16:24:31.483866    3649 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:24:31.484264    3649 main.go:141] libmachine: Using API Version  1
	I0815 16:24:31.484274    3649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:24:31.484483    3649 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:24:31.484590    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:31.513331    3649 out.go:177] * Using the hyperkit driver based on existing profile
	I0815 16:24:31.555013    3649 start.go:297] selected driver: hyperkit
	I0815 16:24:31.555040    3649 start.go:901] validating driver "hyperkit" against &{Name:ha-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:24:31.555294    3649 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 16:24:31.555482    3649 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:24:31.555679    3649 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19452-977/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0815 16:24:31.565322    3649 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0815 16:24:31.570113    3649 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:24:31.570133    3649 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0815 16:24:31.573295    3649 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 16:24:31.573376    3649 cni.go:84] Creating CNI manager for ""
	I0815 16:24:31.573385    3649 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0815 16:24:31.573458    3649 start.go:340] cluster config:
	{Name:ha-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:24:31.573576    3649 iso.go:125] acquiring lock: {Name:mkb93fc66b60be15dffa9eb2999a4be8d8d4d218 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:24:31.616257    3649 out.go:177] * Starting "ha-138000" primary control-plane node in "ha-138000" cluster
	I0815 16:24:31.636985    3649 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:24:31.637060    3649 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0815 16:24:31.637085    3649 cache.go:56] Caching tarball of preloaded images
	I0815 16:24:31.637273    3649 preload.go:172] Found /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0815 16:24:31.637292    3649 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:24:31.637487    3649 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:24:31.638371    3649 start.go:360] acquireMachinesLock for ha-138000: {Name:mkac8034c8a60617271f2754a03dcaf74fc4fef7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:24:31.638490    3649 start.go:364] duration metric: took 82.356µs to acquireMachinesLock for "ha-138000"
	I0815 16:24:31.638525    3649 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:24:31.638544    3649 fix.go:54] fixHost starting: 
	I0815 16:24:31.638958    3649 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:24:31.639008    3649 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:24:31.648062    3649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52026
	I0815 16:24:31.648421    3649 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:24:31.648791    3649 main.go:141] libmachine: Using API Version  1
	I0815 16:24:31.648804    3649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:24:31.649022    3649 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:24:31.649142    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:31.649278    3649 main.go:141] libmachine: (ha-138000) Calling .GetState
	I0815 16:24:31.649372    3649 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:24:31.649446    3649 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid from json: 3071
	I0815 16:24:31.650352    3649 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid 3071 missing from process table
	I0815 16:24:31.650388    3649 fix.go:112] recreateIfNeeded on ha-138000: state=Stopped err=<nil>
	I0815 16:24:31.650403    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	W0815 16:24:31.650489    3649 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:24:31.698042    3649 out.go:177] * Restarting existing hyperkit VM for "ha-138000" ...
	I0815 16:24:31.718584    3649 main.go:141] libmachine: (ha-138000) Calling .Start
	I0815 16:24:31.718879    3649 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:24:31.718940    3649 main.go:141] libmachine: (ha-138000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/hyperkit.pid
	I0815 16:24:31.721002    3649 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid 3071 missing from process table
	I0815 16:24:31.721020    3649 main.go:141] libmachine: (ha-138000) DBG | pid 3071 is in state "Stopped"
	I0815 16:24:31.721044    3649 main.go:141] libmachine: (ha-138000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/hyperkit.pid...
	I0815 16:24:31.721441    3649 main.go:141] libmachine: (ha-138000) DBG | Using UUID bf1b12d0-37a9-4c04-a028-0dd0a6dcd337
	I0815 16:24:31.829003    3649 main.go:141] libmachine: (ha-138000) DBG | Generated MAC 66:4d:cd:54:35:15
	I0815 16:24:31.829029    3649 main.go:141] libmachine: (ha-138000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000
	I0815 16:24:31.829133    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bf1b12d0-37a9-4c04-a028-0dd0a6dcd337", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c24e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:24:31.829169    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bf1b12d0-37a9-4c04-a028-0dd0a6dcd337", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c24e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:24:31.829203    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "bf1b12d0-37a9-4c04-a028-0dd0a6dcd337", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/ha-138000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"}
	I0815 16:24:31.829238    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U bf1b12d0-37a9-4c04-a028-0dd0a6dcd337 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/ha-138000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/console-ring -f kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"
	I0815 16:24:31.829247    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0815 16:24:31.830765    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 DEBUG: hyperkit: Pid is 3662
	I0815 16:24:31.831139    3649 main.go:141] libmachine: (ha-138000) DBG | Attempt 0
	I0815 16:24:31.831155    3649 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:24:31.831242    3649 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid from json: 3662
	I0815 16:24:31.832840    3649 main.go:141] libmachine: (ha-138000) DBG | Searching for 66:4d:cd:54:35:15 in /var/db/dhcpd_leases ...
	I0815 16:24:31.832917    3649 main.go:141] libmachine: (ha-138000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0815 16:24:31.832934    3649 main.go:141] libmachine: (ha-138000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be8e15}
	I0815 16:24:31.832943    3649 main.go:141] libmachine: (ha-138000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfdf74}
	I0815 16:24:31.832962    3649 main.go:141] libmachine: (ha-138000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfdedc}
	I0815 16:24:31.832970    3649 main.go:141] libmachine: (ha-138000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfde64}
	I0815 16:24:31.832977    3649 main.go:141] libmachine: (ha-138000) DBG | Found match: 66:4d:cd:54:35:15
	I0815 16:24:31.833028    3649 main.go:141] libmachine: (ha-138000) DBG | IP: 192.169.0.5
	I0815 16:24:31.833038    3649 main.go:141] libmachine: (ha-138000) Calling .GetConfigRaw
	I0815 16:24:31.833705    3649 main.go:141] libmachine: (ha-138000) Calling .GetIP
	I0815 16:24:31.833895    3649 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:24:31.834359    3649 machine.go:93] provisionDockerMachine start ...
	I0815 16:24:31.834370    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:31.834509    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:31.834611    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:31.834733    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:31.834881    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:31.834976    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:31.835114    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:24:31.835296    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:24:31.835304    3649 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 16:24:31.838795    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0815 16:24:31.891055    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0815 16:24:31.891732    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:24:31.891746    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:24:31.891753    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:24:31.891763    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:24:32.275543    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0815 16:24:32.275556    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0815 16:24:32.390162    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:24:32.390181    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:24:32.390193    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:24:32.390217    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:24:32.391060    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0815 16:24:32.391070    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0815 16:24:37.953601    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:37 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0815 16:24:37.953741    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:37 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0815 16:24:37.953751    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:37 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0815 16:24:37.980241    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:37 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0815 16:24:42.910400    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
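The first SSH round trip simply runs `hostname` to confirm the guest is reachable before provisioning continues; the `&{{{<nil> …} 192.169.0.5 22 …}` dump above is the native Go SSH client configuration being logged. A minimal sketch of the same probe using golang.org/x/crypto/ssh (key path and address are illustrative, not taken from this run):

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Illustrative key path; the log uses the machine's id_rsa.
    	keyBytes, err := os.ReadFile("/tmp/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VMs have throwaway host keys
    	}
    	client, err := ssh.Dial("tcp", "192.169.0.5:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	out, err := sess.Output("hostname") // same first probe as the log
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("guest hostname: %s", out)
    }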
	I0815 16:24:42.910418    3649 main.go:141] libmachine: (ha-138000) Calling .GetMachineName
	I0815 16:24:42.910559    3649 buildroot.go:166] provisioning hostname "ha-138000"
	I0815 16:24:42.910571    3649 main.go:141] libmachine: (ha-138000) Calling .GetMachineName
	I0815 16:24:42.910673    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:42.910777    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:42.910859    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:42.910959    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:42.911045    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:42.911177    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:24:42.911343    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:24:42.911352    3649 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-138000 && echo "ha-138000" | sudo tee /etc/hostname
	I0815 16:24:42.985179    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-138000
	
	I0815 16:24:42.985199    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:42.985338    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:42.985446    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:42.985538    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:42.985614    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:42.985749    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:24:42.985891    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:24:42.985905    3649 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-138000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-138000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-138000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 16:24:43.055472    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 16:24:43.055491    3649 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19452-977/.minikube CaCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19452-977/.minikube}
	I0815 16:24:43.055508    3649 buildroot.go:174] setting up certificates
	I0815 16:24:43.055515    3649 provision.go:84] configureAuth start
	I0815 16:24:43.055522    3649 main.go:141] libmachine: (ha-138000) Calling .GetMachineName
	I0815 16:24:43.055669    3649 main.go:141] libmachine: (ha-138000) Calling .GetIP
	I0815 16:24:43.055769    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:43.055868    3649 provision.go:143] copyHostCerts
	I0815 16:24:43.055901    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:24:43.055963    3649 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem, removing ...
	I0815 16:24:43.055971    3649 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:24:43.056106    3649 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem (1082 bytes)
	I0815 16:24:43.056322    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:24:43.056353    3649 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem, removing ...
	I0815 16:24:43.056358    3649 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:24:43.056432    3649 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem (1123 bytes)
	I0815 16:24:43.056583    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:24:43.056611    3649 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem, removing ...
	I0815 16:24:43.056615    3649 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:24:43.056681    3649 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem (1679 bytes)
	I0815 16:24:43.056840    3649 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem org=jenkins.ha-138000 san=[127.0.0.1 192.169.0.5 ha-138000 localhost minikube]
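The server certificate generated here covers every SAN the Docker endpoint can be reached by: loopback, the VM's DHCP address, and the machine names. A minimal Go sketch of issuing such a leaf certificate from a CA using only the standard library (the throwaway in-memory CA and the lifetimes are assumptions for illustration; error handling is elided):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA, standing in for ca.pem/ca-key.pem above.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server leaf with the SAN list from the log: DNS names plus IPs.
    	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	leafTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-138000"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"ha-138000", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
    	}
    	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }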
	I0815 16:24:43.121501    3649 provision.go:177] copyRemoteCerts
	I0815 16:24:43.121552    3649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 16:24:43.121568    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:43.121697    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:43.121782    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:43.121880    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:43.121971    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:24:43.165154    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 16:24:43.165236    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 16:24:43.200018    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 16:24:43.200092    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0815 16:24:43.220757    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 16:24:43.220829    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 16:24:43.240667    3649 provision.go:87] duration metric: took 185.141163ms to configureAuth
	I0815 16:24:43.240680    3649 buildroot.go:189] setting minikube options for container-runtime
	I0815 16:24:43.240849    3649 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:24:43.240863    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:43.240998    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:43.241100    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:43.241183    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:43.241273    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:43.241367    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:43.241484    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:24:43.241652    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:24:43.241660    3649 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0815 16:24:43.302884    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0815 16:24:43.302897    3649 buildroot.go:70] root file system type: tmpfs
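A tmpfs root means the guest is a buildroot live image: everything outside persistent mounts is rebuilt from the ISO on each boot, which is why the docker unit below is regenerated on every provision instead of installed once. The probe is a single shell pipeline; an equivalent local check in Go, assuming sh and df exist as they do in the guest:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // rootFSType runs the same probe the provisioner sends over SSH.
    func rootFSType() (string, error) {
    	out, err := exec.Command("sh", "-c", "df --output=fstype / | tail -n 1").Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	fstype, err := rootFSType()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("root fs:", fstype) // "tmpfs" on a buildroot live image
    }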
	I0815 16:24:43.302965    3649 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0815 16:24:43.302977    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:43.303108    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:43.303198    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:43.303278    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:43.303364    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:43.303495    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:24:43.303638    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:24:43.303683    3649 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0815 16:24:43.378222    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0815 16:24:43.378246    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:43.378382    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:43.378461    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:43.378563    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:43.378649    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:43.378787    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:24:43.378932    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:24:43.378946    3649 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0815 16:24:45.080555    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0815 16:24:45.080572    3649 machine.go:96] duration metric: took 13.246248166s to provisionDockerMachine
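The install step above is a write-if-changed idiom: the rendered unit goes to docker.service.new, and only when `diff -u` reports a difference is it moved into place and docker reloaded, enabled, and restarted (here the target did not exist yet, hence the diff error and the fresh symlink). The empty `ExecStart=` line in the unit clears the inherited command first, exactly as its own comment explains. A local sketch of the same idiom:

    package main

    import (
    	"bytes"
    	"os"
    )

    // writeIfChanged mirrors the "diff || mv" pattern from the log: the
    // new unit is only moved into place (and the daemon reloaded) when
    // its contents differ from what is already installed.
    func writeIfChanged(path string, data []byte) (bool, error) {
    	old, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(old, data) {
    		return false, nil // identical: skip the restart path entirely
    	}
    	if err := os.WriteFile(path, data, 0o644); err != nil {
    		return false, err
    	}
    	return true, nil
    }

    func main() {
    	changed, err := writeIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
    	if err != nil {
    		panic(err)
    	}
    	// When changed is true the caller would run
    	// "systemctl daemon-reload && systemctl enable docker && systemctl restart docker".
    	_ = changed
    }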
	I0815 16:24:45.080585    3649 start.go:293] postStartSetup for "ha-138000" (driver="hyperkit")
	I0815 16:24:45.080595    3649 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 16:24:45.080616    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:45.080791    3649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 16:24:45.080805    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:45.080908    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:45.080996    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:45.081081    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:45.081171    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:24:45.119742    3649 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 16:24:45.122978    3649 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 16:24:45.122994    3649 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/addons for local assets ...
	I0815 16:24:45.123095    3649 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/files for local assets ...
	I0815 16:24:45.123274    3649 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> 14982.pem in /etc/ssl/certs
	I0815 16:24:45.123280    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /etc/ssl/certs/14982.pem
	I0815 16:24:45.123473    3649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 16:24:45.130896    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:24:45.150554    3649 start.go:296] duration metric: took 69.960327ms for postStartSetup
	I0815 16:24:45.150578    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:45.150756    3649 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0815 16:24:45.150769    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:45.150849    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:45.150943    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:45.151041    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:45.151122    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:24:45.187860    3649 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0815 16:24:45.187918    3649 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0815 16:24:45.240522    3649 fix.go:56] duration metric: took 13.602028125s for fixHost
	I0815 16:24:45.240543    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:45.240694    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:45.240782    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:45.240866    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:45.240953    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:45.241079    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:24:45.241222    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:24:45.241230    3649 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 16:24:45.308498    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723764285.546205797
	
	I0815 16:24:45.308509    3649 fix.go:216] guest clock: 1723764285.546205797
	I0815 16:24:45.308515    3649 fix.go:229] Guest: 2024-08-15 16:24:45.546205797 -0700 PDT Remote: 2024-08-15 16:24:45.240533 -0700 PDT m=+14.043250910 (delta=305.672797ms)
	I0815 16:24:45.308536    3649 fix.go:200] guest clock delta is within tolerance: 305.672797ms
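Clock verification reads `date +%s.%N` inside the guest and compares it with the host's wall clock; a resync is only forced when the delta exceeds tolerance, and the 305ms measured here passes. A sketch of parsing that output and computing the delta (the sample timestamp is the one from this run):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock converts `date +%s.%N` output such as
    // "1723764285.546205797" into a time.Time.
    func parseGuestClock(s string) (time.Time, error) {
    	secStr, nsecStr, ok := strings.Cut(strings.TrimSpace(s), ".")
    	if !ok {
    		nsecStr = "0"
    	}
    	sec, err := strconv.ParseInt(secStr, 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	nsec, err := strconv.ParseInt(nsecStr, 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1723764285.546205797")
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	fmt.Println("guest clock delta:", delta) // resync only if this exceeds tolerance
    }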
	I0815 16:24:45.308540    3649 start.go:83] releasing machines lock for "ha-138000", held for 13.670085598s
	I0815 16:24:45.308562    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:45.308691    3649 main.go:141] libmachine: (ha-138000) Calling .GetIP
	I0815 16:24:45.308815    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:45.309125    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:45.309228    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:45.309333    3649 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 16:24:45.309348    3649 ssh_runner.go:195] Run: cat /version.json
	I0815 16:24:45.309359    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:45.309374    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:45.309454    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:45.309481    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:45.309570    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:45.309586    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:45.309666    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:45.309673    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:45.309753    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:24:45.309764    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:24:45.353596    3649 ssh_runner.go:195] Run: systemctl --version
	I0815 16:24:45.358729    3649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 16:24:45.412525    3649 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 16:24:45.412627    3649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 16:24:45.428066    3649 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 16:24:45.428077    3649 start.go:495] detecting cgroup driver to use...
	I0815 16:24:45.428183    3649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:24:45.444602    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0815 16:24:45.453384    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0815 16:24:45.462134    3649 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0815 16:24:45.462180    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0815 16:24:45.470781    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:24:45.479385    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0815 16:24:45.487960    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:24:45.496691    3649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 16:24:45.505669    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0815 16:24:45.514277    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0815 16:24:45.522851    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0815 16:24:45.531584    3649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 16:24:45.539529    3649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 16:24:45.547375    3649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:24:45.642699    3649 ssh_runner.go:195] Run: sudo systemctl restart containerd
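The sed pipeline above rewrites /etc/containerd/config.toml in place: pause image, cgroup driver, runc v2 runtime, and the CNI conf dir, each substitution preserving the line's original indentation via the captured group. The same indentation-preserving rewrite expressed with Go's regexp package instead of sed (the sample config text is illustrative):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := `[plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.9"
      SystemdCgroup = true`

    	// Equivalent of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	reCgroup := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	conf = reCgroup.ReplaceAllString(conf, "${1}SystemdCgroup = false")

    	// Equivalent of: sed -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|'
    	rePause := regexp.MustCompile(`(?m)^( *)sandbox_image = .*$`)
    	conf = rePause.ReplaceAllString(conf, `${1}sandbox_image = "registry.k8s.io/pause:3.10"`)

    	fmt.Println(conf)
    }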
	I0815 16:24:45.657803    3649 start.go:495] detecting cgroup driver to use...
	I0815 16:24:45.657881    3649 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0815 16:24:45.669244    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:24:45.680074    3649 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 16:24:45.692718    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:24:45.703066    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:24:45.713234    3649 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0815 16:24:45.735236    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:24:45.745677    3649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:24:45.760852    3649 ssh_runner.go:195] Run: which cri-dockerd
	I0815 16:24:45.763929    3649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0815 16:24:45.771021    3649 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0815 16:24:45.784172    3649 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0815 16:24:45.887215    3649 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0815 16:24:45.995634    3649 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0815 16:24:45.995716    3649 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0815 16:24:46.010389    3649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:24:46.126522    3649 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0815 16:24:48.464685    3649 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.338152009s)
	I0815 16:24:48.464761    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0815 16:24:48.475831    3649 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0815 16:24:48.490512    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:24:48.501692    3649 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0815 16:24:48.596754    3649 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0815 16:24:48.705379    3649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:24:48.807279    3649 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0815 16:24:48.821232    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:24:48.832145    3649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:24:48.931537    3649 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0815 16:24:48.994946    3649 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0815 16:24:48.995028    3649 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0815 16:24:48.999199    3649 start.go:563] Will wait 60s for crictl version
	I0815 16:24:48.999246    3649 ssh_runner.go:195] Run: which crictl
	I0815 16:24:49.002242    3649 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 16:24:49.031023    3649 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0815 16:24:49.031095    3649 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:24:49.049391    3649 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:24:49.110204    3649 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0815 16:24:49.110253    3649 main.go:141] libmachine: (ha-138000) Calling .GetIP
	I0815 16:24:49.110630    3649 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0815 16:24:49.114885    3649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
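The bash one-liner above is an idempotent /etc/hosts update: drop any line already ending in the tab-separated alias, append a fresh `IP<tab>alias` mapping, and copy the temp file back over /etc/hosts. The same filter-and-append in Go (written against a scratch path rather than the real /etc/hosts):

    package main

    import (
    	"os"
    	"strings"
    )

    // upsertHost removes any existing line for the alias and appends a
    // fresh "IP<tab>alias" mapping, mirroring the grep -v / echo pipeline.
    func upsertHost(path, ip, alias string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var keep []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+alias) {
    			continue // drop the stale entry
    		}
    		keep = append(keep, line)
    	}
    	keep = append(keep, ip+"\t"+alias)
    	return os.WriteFile(path, []byte(strings.Join(keep, "\n")+"\n"), 0o644)
    }

    func main() {
    	if err := upsertHost("/tmp/hosts", "192.169.0.1", "host.minikube.internal"); err != nil {
    		panic(err)
    	}
    }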
	I0815 16:24:49.125317    3649 kubeadm.go:883] updating cluster {Name:ha-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 16:24:49.125409    3649 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:24:49.125461    3649 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0815 16:24:49.138389    3649 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0815 16:24:49.138400    3649 docker.go:615] Images already preloaded, skipping extraction
	I0815 16:24:49.138469    3649 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0815 16:24:49.152217    3649 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0815 16:24:49.152236    3649 cache_images.go:84] Images are preloaded, skipping loading
	I0815 16:24:49.152245    3649 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0815 16:24:49.152316    3649 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-138000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 16:24:49.152387    3649 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0815 16:24:49.188207    3649 cni.go:84] Creating CNI manager for ""
	I0815 16:24:49.188219    3649 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0815 16:24:49.188233    3649 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 16:24:49.188247    3649 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-138000 NodeName:ha-138000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 16:24:49.188328    3649 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-138000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
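The rendered kubeadm payload above is several YAML documents in one file: InitConfiguration for node-local bootstrap, ClusterConfiguration for control-plane-wide settings, then KubeletConfiguration and KubeProxyConfiguration. A sketch of how such a multi-document file can be rendered with text/template (the parameter struct and field names are hypothetical, not minikube's):

    package main

    import (
    	"os"
    	"text/template"
    )

    // kubeadmParams is an illustrative parameter struct for the template.
    type kubeadmParams struct {
    	AdvertiseAddress string
    	BindPort         int
    	NodeName         string
    	PodSubnet        string
    }

    const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    networking:
      podSubnet: "{{.PodSubnet}}"
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
    	p := kubeadmParams{
    		AdvertiseAddress: "192.169.0.5",
    		BindPort:         8443,
    		NodeName:         "ha-138000",
    		PodSubnet:        "10.244.0.0/16",
    	}
    	if err := t.Execute(os.Stdout, p); err != nil {
    		panic(err)
    	}
    }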
	I0815 16:24:49.188340    3649 kube-vip.go:115] generating kube-vip config ...
	I0815 16:24:49.188395    3649 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 16:24:49.201717    3649 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 16:24:49.201810    3649 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0815 16:24:49.201860    3649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 16:24:49.210773    3649 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 16:24:49.210821    3649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0815 16:24:49.218705    3649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0815 16:24:49.232092    3649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 16:24:49.245488    3649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0815 16:24:49.259182    3649 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0815 16:24:49.272667    3649 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0815 16:24:49.275463    3649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 16:24:49.285341    3649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:24:49.379165    3649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:24:49.393690    3649 certs.go:68] Setting up /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000 for IP: 192.169.0.5
	I0815 16:24:49.393701    3649 certs.go:194] generating shared ca certs ...
	I0815 16:24:49.393711    3649 certs.go:226] acquiring lock for ca certs: {Name:mka1c88019f6064d2983ac988db71a67aeb65696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:24:49.393886    3649 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key
	I0815 16:24:49.393940    3649 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key
	I0815 16:24:49.393952    3649 certs.go:256] generating profile certs ...
	I0815 16:24:49.394054    3649 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.key
	I0815 16:24:49.394074    3649 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key.7af4c91a
	I0815 16:24:49.394091    3649 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt.7af4c91a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0815 16:24:49.771714    3649 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt.7af4c91a ...
	I0815 16:24:49.771738    3649 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt.7af4c91a: {Name:mkfdf96fafb98f174dadc5b6379869463c2a6ca3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:24:49.772085    3649 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key.7af4c91a ...
	I0815 16:24:49.772094    3649 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key.7af4c91a: {Name:mk0c2b233ae670508e502baf145f82fc5c8af979 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:24:49.772311    3649 certs.go:381] copying /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt.7af4c91a -> /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt
	I0815 16:24:49.772506    3649 certs.go:385] copying /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key.7af4c91a -> /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key
	I0815 16:24:49.772728    3649 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key
	I0815 16:24:49.772737    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 16:24:49.772760    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 16:24:49.772779    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 16:24:49.772798    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 16:24:49.772818    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 16:24:49.772836    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 16:24:49.772855    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 16:24:49.772873    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 16:24:49.772972    3649 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem (1338 bytes)
	W0815 16:24:49.773012    3649 certs.go:480] ignoring /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498_empty.pem, impossibly tiny 0 bytes
	I0815 16:24:49.773021    3649 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 16:24:49.773066    3649 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem (1082 bytes)
	I0815 16:24:49.773106    3649 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem (1123 bytes)
	I0815 16:24:49.773135    3649 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem (1679 bytes)
	I0815 16:24:49.773201    3649 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:24:49.773235    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /usr/share/ca-certificates/14982.pem
	I0815 16:24:49.773257    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:24:49.773276    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem -> /usr/share/ca-certificates/1498.pem
	I0815 16:24:49.773761    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 16:24:49.799905    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 16:24:49.819857    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 16:24:49.839446    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 16:24:49.859479    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 16:24:49.878979    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 16:24:49.898857    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 16:24:49.918488    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 16:24:49.938289    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /usr/share/ca-certificates/14982.pem (1708 bytes)
	I0815 16:24:49.958067    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 16:24:49.977508    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem --> /usr/share/ca-certificates/1498.pem (1338 bytes)
	I0815 16:24:49.997111    3649 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 16:24:50.010415    3649 ssh_runner.go:195] Run: openssl version
	I0815 16:24:50.014564    3649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14982.pem && ln -fs /usr/share/ca-certificates/14982.pem /etc/ssl/certs/14982.pem"
	I0815 16:24:50.022762    3649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14982.pem
	I0815 16:24:50.025974    3649 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:14 /usr/share/ca-certificates/14982.pem
	I0815 16:24:50.026012    3649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14982.pem
	I0815 16:24:50.030247    3649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14982.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 16:24:50.038688    3649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 16:24:50.046935    3649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:24:50.050205    3649 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:05 /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:24:50.050240    3649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:24:50.054437    3649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 16:24:50.062668    3649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1498.pem && ln -fs /usr/share/ca-certificates/1498.pem /etc/ssl/certs/1498.pem"
	I0815 16:24:50.070835    3649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1498.pem
	I0815 16:24:50.074144    3649 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:14 /usr/share/ca-certificates/1498.pem
	I0815 16:24:50.074179    3649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1498.pem
	I0815 16:24:50.078407    3649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1498.pem /etc/ssl/certs/51391683.0"
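The link names above (3ec20f2e.0, b5213941.0, 51391683.0) come from OpenSSL's subject-hash scheme: `openssl x509 -hash -noout` prints the hash, and a `<hash>.0` symlink in /etc/ssl/certs lets TLS verification find the CA by subject. A sketch reproducing the linking step, shelling out to openssl just as the provisioner does (the certs directory is parameterized so it can run against a scratch dir):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash creates the <hash>.0 symlink that OpenSSL uses
    // to look a CA certificate up by subject.
    func linkBySubjectHash(certsDir, pemPath string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return "", err
    	}
    	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
    	if err := os.Symlink(pemPath, link); err != nil && !os.IsExist(err) {
    		return "", err
    	}
    	return link, nil
    }

    func main() {
    	link, err := linkBySubjectHash("/tmp", "/tmp/minikubeCA.pem")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("created", link)
    }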
	I0815 16:24:50.087458    3649 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 16:24:50.090800    3649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 16:24:50.095300    3649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 16:24:50.099454    3649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 16:24:50.104181    3649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 16:24:50.108451    3649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 16:24:50.112679    3649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0815 16:24:50.116963    3649 kubeadm.go:392] StartCluster: {Name:ha-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:24:50.117082    3649 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0815 16:24:50.130554    3649 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 16:24:50.137992    3649 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 16:24:50.138004    3649 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 16:24:50.138048    3649 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 16:24:50.145558    3649 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 16:24:50.145859    3649 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-138000" does not appear in /Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:24:50.145940    3649 kubeconfig.go:62] /Users/jenkins/minikube-integration/19452-977/kubeconfig needs updating (will repair): [kubeconfig missing "ha-138000" cluster setting kubeconfig missing "ha-138000" context setting]
	I0815 16:24:50.146137    3649 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-977/kubeconfig: {Name:mk0f04b0199f84dc20eb294d5b790451b41e43fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:24:50.146558    3649 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:24:50.146752    3649 kapi.go:59] client config for ha-138000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.key", CAFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x5983f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
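
Note: kubeconfig.go above detects that the "ha-138000" cluster and context stanzas are missing and repairs the file under a write lock before reusing it. A sketch of the same repair via client-go's clientcmd API, assuming the server URL from the log; minikube's own code path differs in detail:

    package main

    import (
    	"k8s.io/client-go/tools/clientcmd"
    	api "k8s.io/client-go/tools/clientcmd/api"
    )

    // repairKubeconfig adds any missing cluster/context entries for name,
    // leaving everything else in the file untouched.
    func repairKubeconfig(path, name, server string) error {
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		return err
    	}
    	if _, ok := cfg.Clusters[name]; !ok {
    		c := api.NewCluster()
    		c.Server = server
    		cfg.Clusters[name] = c
    	}
    	if _, ok := cfg.Contexts[name]; !ok {
    		ctx := api.NewContext()
    		ctx.Cluster, ctx.AuthInfo = name, name
    		cfg.Contexts[name] = ctx
    	}
    	return clientcmd.WriteToFile(*cfg, path)
    }

    func main() {
    	_ = repairKubeconfig("kubeconfig", "ha-138000", "https://192.169.0.5:8443")
    }
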
	I0815 16:24:50.147060    3649 cert_rotation.go:140] Starting client certificate rotation controller
	I0815 16:24:50.147235    3649 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 16:24:50.154308    3649 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0815 16:24:50.154320    3649 kubeadm.go:597] duration metric: took 16.312125ms to restartPrimaryControlPlane
	I0815 16:24:50.154325    3649 kubeadm.go:394] duration metric: took 37.367941ms to StartCluster
	I0815 16:24:50.154333    3649 settings.go:142] acquiring lock: {Name:mk694dad19d37394fa6b13c51a7dc54b62e97c12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:24:50.154408    3649 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:24:50.154767    3649 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-977/kubeconfig: {Name:mk0f04b0199f84dc20eb294d5b790451b41e43fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:24:50.154992    3649 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 16:24:50.155005    3649 start.go:241] waiting for startup goroutines ...
	I0815 16:24:50.155016    3649 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 16:24:50.155148    3649 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:24:50.196433    3649 out.go:177] * Enabled addons: 
	I0815 16:24:50.217474    3649 addons.go:510] duration metric: took 62.454726ms for enable addons: enabled=[]
	I0815 16:24:50.217512    3649 start.go:246] waiting for cluster config update ...
	I0815 16:24:50.217524    3649 start.go:255] writing updated cluster config ...
	I0815 16:24:50.239613    3649 out.go:201] 
	I0815 16:24:50.260810    3649 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:24:50.260937    3649 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:24:50.282712    3649 out.go:177] * Starting "ha-138000-m02" control-plane node in "ha-138000" cluster
	I0815 16:24:50.324521    3649 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:24:50.324584    3649 cache.go:56] Caching tarball of preloaded images
	I0815 16:24:50.324754    3649 preload.go:172] Found /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0815 16:24:50.324772    3649 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:24:50.324901    3649 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:24:50.325802    3649 start.go:360] acquireMachinesLock for ha-138000-m02: {Name:mkac8034c8a60617271f2754a03dcaf74fc4fef7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:24:50.325911    3649 start.go:364] duration metric: took 84.439µs to acquireMachinesLock for "ha-138000-m02"
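
Note: acquireMachinesLock above logs its parameters (Delay:500ms Timeout:13m0s), i.e. a poll-and-retry lock serializing machine operations. A hypothetical file-based equivalent with the same parameters; this is not minikube's actual lock implementation:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // acquireLock polls for an exclusive lock file every `delay`
    // and gives up after `timeout`, as the logged spec suggests.
    func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(path) }, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, fmt.Errorf("timed out acquiring %s", path)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	release, err := acquireLock("machines.lock", 500*time.Millisecond, 13*time.Minute)
    	if err == nil {
    		defer release()
    	}
    }
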
	I0815 16:24:50.325938    3649 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:24:50.325946    3649 fix.go:54] fixHost starting: m02
	I0815 16:24:50.326424    3649 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:24:50.326451    3649 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:24:50.335682    3649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52048
	I0815 16:24:50.336051    3649 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:24:50.336443    3649 main.go:141] libmachine: Using API Version  1
	I0815 16:24:50.336459    3649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:24:50.336675    3649 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:24:50.336791    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:24:50.336888    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetState
	I0815 16:24:50.336961    3649 main.go:141] libmachine: (ha-138000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:24:50.337044    3649 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid from json: 3600
	I0815 16:24:50.337930    3649 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid 3600 missing from process table
	I0815 16:24:50.337962    3649 fix.go:112] recreateIfNeeded on ha-138000-m02: state=Stopped err=<nil>
	I0815 16:24:50.337972    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	W0815 16:24:50.338053    3649 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:24:50.379676    3649 out.go:177] * Restarting existing hyperkit VM for "ha-138000-m02" ...
	I0815 16:24:50.400415    3649 main.go:141] libmachine: (ha-138000-m02) Calling .Start
	I0815 16:24:50.400691    3649 main.go:141] libmachine: (ha-138000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:24:50.400747    3649 main.go:141] libmachine: (ha-138000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/hyperkit.pid
	I0815 16:24:50.402488    3649 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid 3600 missing from process table
	I0815 16:24:50.402502    3649 main.go:141] libmachine: (ha-138000-m02) DBG | pid 3600 is in state "Stopped"
	I0815 16:24:50.402518    3649 main.go:141] libmachine: (ha-138000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/hyperkit.pid...
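
Note: fix.go concludes the node is Stopped because pid 3600 recorded in hyperkit.pid is "missing from process table", then removes the stale pid file before restarting. The conventional probe behind such a check is kill(pid, 0); a sketch (the pid-file path here is illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"strconv"
    	"strings"
    	"syscall"
    )

    // pidAlive probes a pid with signal 0: nil means the process exists
    // (EPERM would also mean it exists but belongs to another user).
    func pidAlive(pid int) bool {
    	return syscall.Kill(pid, syscall.Signal(0)) == nil
    }

    func main() {
    	b, err := os.ReadFile("hyperkit.pid")
    	if err != nil {
    		return
    	}
    	pid, _ := strconv.Atoi(strings.TrimSpace(string(b)))
    	if !pidAlive(pid) {
    		fmt.Printf("pid %d missing from process table; removing stale pid file\n", pid)
    		os.Remove("hyperkit.pid")
    	}
    }
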
	I0815 16:24:50.402857    3649 main.go:141] libmachine: (ha-138000-m02) DBG | Using UUID 4cff9b5a-9fe3-4215-9139-05f05b79bce3
	I0815 16:24:50.432166    3649 main.go:141] libmachine: (ha-138000-m02) DBG | Generated MAC 9a:c2:e9:d7:1c:58
	I0815 16:24:50.432194    3649 main.go:141] libmachine: (ha-138000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000
	I0815 16:24:50.432283    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"4cff9b5a-9fe3-4215-9139-05f05b79bce3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b06c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:24:50.432316    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"4cff9b5a-9fe3-4215-9139-05f05b79bce3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b06c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:24:50.432360    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "4cff9b5a-9fe3-4215-9139-05f05b79bce3", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/ha-138000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"}
	I0815 16:24:50.432400    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 4cff9b5a-9fe3-4215-9139-05f05b79bce3 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/ha-138000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"
	I0815 16:24:50.432410    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0815 16:24:50.433800    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 DEBUG: hyperkit: Pid is 3670
	I0815 16:24:50.434270    3649 main.go:141] libmachine: (ha-138000-m02) DBG | Attempt 0
	I0815 16:24:50.434284    3649 main.go:141] libmachine: (ha-138000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:24:50.434361    3649 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid from json: 3670
	I0815 16:24:50.436313    3649 main.go:141] libmachine: (ha-138000-m02) DBG | Searching for 9a:c2:e9:d7:1c:58 in /var/db/dhcpd_leases ...
	I0815 16:24:50.436365    3649 main.go:141] libmachine: (ha-138000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0815 16:24:50.436381    3649 main.go:141] libmachine: (ha-138000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfdfb9}
	I0815 16:24:50.436395    3649 main.go:141] libmachine: (ha-138000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be8e15}
	I0815 16:24:50.436408    3649 main.go:141] libmachine: (ha-138000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfdf74}
	I0815 16:24:50.436429    3649 main.go:141] libmachine: (ha-138000-m02) DBG | Found match: 9a:c2:e9:d7:1c:58
	I0815 16:24:50.436463    3649 main.go:141] libmachine: (ha-138000-m02) DBG | IP: 192.169.0.6
	I0815 16:24:50.436476    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetConfigRaw
	I0815 16:24:50.437131    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetIP
	I0815 16:24:50.437308    3649 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:24:50.437758    3649 machine.go:93] provisionDockerMachine start ...
	I0815 16:24:50.437768    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:24:50.437887    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:24:50.437997    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:24:50.438094    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:24:50.438199    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:24:50.438287    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:24:50.438398    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:24:50.438546    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:24:50.438554    3649 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 16:24:50.441514    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0815 16:24:50.450166    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0815 16:24:50.451006    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:24:50.451024    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:24:50.451053    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:24:50.451081    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:24:50.836828    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0815 16:24:50.836848    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0815 16:24:50.951307    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:24:50.951325    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:24:50.951354    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:24:50.951377    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:24:50.952254    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0815 16:24:50.952268    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0815 16:24:56.551926    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:56 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0815 16:24:56.551945    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:56 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0815 16:24:56.551957    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:56 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0815 16:24:56.576187    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:56 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0815 16:25:25.506687    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 16:25:25.506701    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetMachineName
	I0815 16:25:25.506833    3649 buildroot.go:166] provisioning hostname "ha-138000-m02"
	I0815 16:25:25.506845    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetMachineName
	I0815 16:25:25.506942    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:25.507027    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:25.507110    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:25.507196    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:25.507274    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:25.507413    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:25:25.507576    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:25:25.507586    3649 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-138000-m02 && echo "ha-138000-m02" | sudo tee /etc/hostname
	I0815 16:25:25.578727    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-138000-m02
	
	I0815 16:25:25.578742    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:25.578877    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:25.578967    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:25.579045    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:25.579129    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:25.579269    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:25:25.579419    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:25:25.579432    3649 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-138000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-138000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-138000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 16:25:25.645270    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
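
Note: the /etc/hosts script above is idempotent: it rewrites the 127.0.1.1 line (or appends one) only when no line already ends in the new hostname, so re-provisioning never duplicates entries. The same logic in Go, as a sketch operating on the file's bytes:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // ensureHostsEntry mirrors the shell: leave the file alone if any
    // line already names the host, else rewrite or append 127.0.1.1.
    func ensureHostsEntry(hosts []byte, name string) []byte {
    	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).Match(hosts) {
    		return hosts
    	}
    	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if re.Match(hosts) {
    		return re.ReplaceAll(hosts, []byte("127.0.1.1 "+name))
    	}
    	return append(hosts, []byte("127.0.1.1 "+name+"\n")...)
    }

    func main() {
    	sample := []byte("127.0.0.1 localhost\n127.0.1.1 stale-name\n")
    	fmt.Printf("%s", ensureHostsEntry(sample, "ha-138000-m02"))
    }
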
	I0815 16:25:25.645285    3649 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19452-977/.minikube CaCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19452-977/.minikube}
	I0815 16:25:25.645301    3649 buildroot.go:174] setting up certificates
	I0815 16:25:25.645307    3649 provision.go:84] configureAuth start
	I0815 16:25:25.645342    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetMachineName
	I0815 16:25:25.645472    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetIP
	I0815 16:25:25.645569    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:25.645659    3649 provision.go:143] copyHostCerts
	I0815 16:25:25.645686    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:25:25.645746    3649 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem, removing ...
	I0815 16:25:25.645752    3649 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:25:25.645910    3649 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem (1082 bytes)
	I0815 16:25:25.646118    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:25:25.646164    3649 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem, removing ...
	I0815 16:25:25.646169    3649 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:25:25.646253    3649 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem (1123 bytes)
	I0815 16:25:25.646420    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:25:25.646496    3649 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem, removing ...
	I0815 16:25:25.646504    3649 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:25:25.646598    3649 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem (1679 bytes)
	I0815 16:25:25.646765    3649 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem org=jenkins.ha-138000-m02 san=[127.0.0.1 192.169.0.6 ha-138000-m02 localhost minikube]
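
Note: provision.go above issues a per-machine server certificate whose SAN set covers every way a client may address the node (127.0.0.1, 192.169.0.6, ha-138000-m02, localhost, minikube), signed by the cached CA. A compact crypto/x509 sketch of such issuance, with a throwaway CA standing in for minikube's ca.pem:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    // newServerCert issues a TLS server cert carrying the SANs from the log.
    func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-138000-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"ha-138000-m02", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	return der, key, err
    }

    func main() {
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1), Subject: pkix.Name{CommonName: "minikubeCA"},
    		NotBefore: time.Now(), NotAfter: time.Now().Add(24 * time.Hour),
    		IsCA: true, KeyUsage: x509.KeyUsageCertSign, BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	ca, _ := x509.ParseCertificate(caDER)
    	der, _, err := newServerCert(ca, caKey)
    	fmt.Println(len(der), err)
    }
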
	I0815 16:25:25.825658    3649 provision.go:177] copyRemoteCerts
	I0815 16:25:25.825707    3649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 16:25:25.825722    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:25.825863    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:25.825953    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:25.826053    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:25.826140    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	I0815 16:25:25.862344    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 16:25:25.862417    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 16:25:25.882572    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 16:25:25.882639    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 16:25:25.902404    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 16:25:25.902470    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 16:25:25.922317    3649 provision.go:87] duration metric: took 277.0023ms to configureAuth
	I0815 16:25:25.922332    3649 buildroot.go:189] setting minikube options for container-runtime
	I0815 16:25:25.922512    3649 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:25:25.922526    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:25:25.922660    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:25.922753    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:25.922847    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:25.922931    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:25.923029    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:25.923140    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:25:25.923269    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:25:25.923277    3649 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0815 16:25:25.984805    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0815 16:25:25.984816    3649 buildroot.go:70] root file system type: tmpfs
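
Note: buildroot.go keys provisioning on the root filesystem type reported by "df --output=fstype / | tail -n 1"; tmpfs here means the guest's root runs from RAM, consistent with the docker unit below being rendered and installed fresh rather than assumed persistent. The same probe in Go (--output is GNU df, which the buildroot guest ships; BSD df lacks it):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // rootFSType runs the probe from the log and returns the last
    // whitespace-separated token, i.e. the fstype under the header line.
    func rootFSType() (string, error) {
    	out, err := exec.Command("df", "--output=fstype", "/").Output()
    	if err != nil {
    		return "", err
    	}
    	fields := strings.Fields(string(out))
    	if len(fields) == 0 {
    		return "", fmt.Errorf("empty df output")
    	}
    	return fields[len(fields)-1], nil
    }

    func main() {
    	t, err := rootFSType()
    	fmt.Println(t, err) // "tmpfs" in the guest from this log
    }
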
	I0815 16:25:25.984938    3649 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0815 16:25:25.984949    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:25.985083    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:25.985169    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:25.985249    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:25.985329    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:25.985450    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:25:25.985607    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:25:25.985653    3649 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0815 16:25:26.056607    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0815 16:25:26.056625    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:26.056761    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:26.056863    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:26.056957    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:26.057043    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:26.057179    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:25:26.057326    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:25:26.057338    3649 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0815 16:25:27.732286    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
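
Note: the "diff ... || { mv ...; systemctl ... }" one-liner above makes the unit install idempotent: systemd is only reloaded and docker only restarted when the rendered unit actually differs (here diff failed because no unit existed yet, so the new file was installed and the service enabled). The compare-then-swap core of that pattern, sketched in Go:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // installIfChanged writes rendered to dst only when contents differ,
    // reporting whether a daemon-reload/restart is needed.
    func installIfChanged(dst string, rendered []byte) (changed bool, err error) {
    	old, err := os.ReadFile(dst)
    	if err == nil && bytes.Equal(old, rendered) {
    		return false, nil
    	}
    	if err := os.WriteFile(dst, rendered, 0o644); err != nil {
    		return false, err
    	}
    	return true, nil
    }

    func main() {
    	changed, err := installIfChanged("docker.service", []byte("[Unit]\n..."))
    	fmt.Println(changed, err)
    }
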
	I0815 16:25:27.732301    3649 machine.go:96] duration metric: took 37.294661422s to provisionDockerMachine
	I0815 16:25:27.732309    3649 start.go:293] postStartSetup for "ha-138000-m02" (driver="hyperkit")
	I0815 16:25:27.732317    3649 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 16:25:27.732327    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:25:27.732516    3649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 16:25:27.732528    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:27.732625    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:27.732731    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:27.732809    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:27.732896    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	I0815 16:25:27.769243    3649 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 16:25:27.772355    3649 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 16:25:27.772366    3649 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/addons for local assets ...
	I0815 16:25:27.772467    3649 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/files for local assets ...
	I0815 16:25:27.772656    3649 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> 14982.pem in /etc/ssl/certs
	I0815 16:25:27.772668    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /etc/ssl/certs/14982.pem
	I0815 16:25:27.772873    3649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 16:25:27.780868    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:25:27.799647    3649 start.go:296] duration metric: took 67.329668ms for postStartSetup
	I0815 16:25:27.799668    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:25:27.799829    3649 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0815 16:25:27.799842    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:27.799928    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:27.800000    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:27.800074    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:27.800149    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	I0815 16:25:27.837218    3649 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0815 16:25:27.837277    3649 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0815 16:25:27.871532    3649 fix.go:56] duration metric: took 37.545710837s for fixHost
	I0815 16:25:27.871559    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:27.871714    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:27.871806    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:27.871884    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:27.871974    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:27.872101    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:25:27.872250    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:25:27.872257    3649 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 16:25:27.932451    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723764328.172025914
	
	I0815 16:25:27.932464    3649 fix.go:216] guest clock: 1723764328.172025914
	I0815 16:25:27.932470    3649 fix.go:229] Guest: 2024-08-15 16:25:28.172025914 -0700 PDT Remote: 2024-08-15 16:25:27.871549 -0700 PDT m=+56.674410917 (delta=300.476914ms)
	I0815 16:25:27.932480    3649 fix.go:200] guest clock delta is within tolerance: 300.476914ms
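
Note: fix.go reads the guest's "date +%s.%N", converts it to a time, and compares it with the host clock; the ~300ms delta above is within tolerance, so no clock adjustment is pushed to the guest. A sketch of the delta computation (float64 loses the nanosecond tail, which is harmless at a millisecond-scale tolerance; the 1-second bound below is an assumption, not minikube's documented value):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // clockDelta parses `date +%s.%N` output and returns guest minus host.
    func clockDelta(out string, host time.Time) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	return guest.Sub(host), nil
    }

    func main() {
    	host := time.Unix(0, 1723764327871549000) // host timestamp from the log
    	d, _ := clockDelta("1723764328.172025914", host)
    	fmt.Println(d, "within tolerance:", d < time.Second && d > -time.Second)
    }
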
	I0815 16:25:27.932484    3649 start.go:83] releasing machines lock for "ha-138000-m02", held for 37.606689063s
	I0815 16:25:27.932502    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:25:27.932640    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetIP
	I0815 16:25:27.955698    3649 out.go:177] * Found network options:
	I0815 16:25:27.976977    3649 out.go:177]   - NO_PROXY=192.169.0.5
	W0815 16:25:27.997880    3649 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 16:25:27.997916    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:25:27.998743    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:25:27.998959    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:25:27.999062    3649 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 16:25:27.999103    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	W0815 16:25:27.999149    3649 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 16:25:27.999255    3649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 16:25:27.999276    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:27.999310    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:27.999538    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:27.999567    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:27.999751    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:27.999778    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:27.999890    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:27.999915    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	I0815 16:25:28.000017    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	W0815 16:25:28.032774    3649 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 16:25:28.032832    3649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 16:25:28.085200    3649 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 16:25:28.085222    3649 start.go:495] detecting cgroup driver to use...
	I0815 16:25:28.085337    3649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:25:28.101256    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0815 16:25:28.110461    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0815 16:25:28.119610    3649 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0815 16:25:28.119671    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0815 16:25:28.128841    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:25:28.137598    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0815 16:25:28.146542    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:25:28.155343    3649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 16:25:28.164400    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0815 16:25:28.173324    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0815 16:25:28.182447    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0815 16:25:28.191439    3649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 16:25:28.199534    3649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 16:25:28.207385    3649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:25:28.307256    3649 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0815 16:25:28.326701    3649 start.go:495] detecting cgroup driver to use...
	I0815 16:25:28.326772    3649 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0815 16:25:28.345963    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:25:28.361865    3649 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 16:25:28.380032    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:25:28.392583    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:25:28.403338    3649 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0815 16:25:28.425534    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:25:28.435952    3649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:25:28.450826    3649 ssh_runner.go:195] Run: which cri-dockerd
	I0815 16:25:28.453880    3649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0815 16:25:28.461213    3649 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0815 16:25:28.474603    3649 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0815 16:25:28.569552    3649 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0815 16:25:28.669486    3649 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0815 16:25:28.669508    3649 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0815 16:25:28.684315    3649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:25:28.789048    3649 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0815 16:26:29.810459    3649 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.021600349s)
	I0815 16:26:29.810528    3649 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
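
Note: when the restart fails, the runner immediately gathers "journalctl --no-pager -u docker" so the daemon's own log travels with the error, which is what produces the dump below. The wrap-the-error pattern, sketched:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // restartWithJournal restarts a systemd unit and, on failure,
    // attaches the unit's journal to the returned error.
    func restartWithJournal(unit string) error {
    	if err := exec.Command("sudo", "systemctl", "restart", unit).Run(); err != nil {
    		j, _ := exec.Command("sudo", "journalctl", "--no-pager", "-u", unit).CombinedOutput()
    		return fmt.Errorf("restart %s: %v\n%s", unit, err, j)
    	}
    	return nil
    }

    func main() {
    	if err := restartWithJournal("docker"); err != nil {
    		fmt.Println(err)
    	}
    }
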
	I0815 16:26:29.846420    3649 out.go:201] 
	W0815 16:26:29.868048    3649 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 15 23:25:25 ha-138000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 15 23:25:25 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:25.509065819Z" level=info msg="Starting up"
	Aug 15 23:25:25 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:25.509592997Z" level=info msg="containerd not running, starting managed containerd"
	Aug 15 23:25:25 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:25.510095236Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=519
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.527964893Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.542679991Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.542751629Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.542813012Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.542847466Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.542971116Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.543022892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.543226251Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.543273769Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.543307918Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.543342764Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.543453732Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.543640009Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.545258649Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.545308637Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.545445977Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.545492906Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.545600399Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.545650460Z" level=info msg="metadata content store policy set" policy=shared
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.547717368Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.547830207Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.547884234Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.548013412Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.548060318Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.548127353Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.548391092Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.552607490Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.552725748Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.552840021Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.552885041Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.552918051Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.552984961Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553030860Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553064737Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553096185Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553126522Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553162873Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553202352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553233572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553266178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553297774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553327631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553357374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553386246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553418283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553450098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553484562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553517795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553547301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553576466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553607695Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553650178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553684928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553713941Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553789004Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553836418Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553870209Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553907631Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.554030910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.554116351Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.554162242Z" level=info msg="NRI interface is disabled by configuration."
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.554425646Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.554560798Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.554647146Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.554690019Z" level=info msg="containerd successfully booted in 0.027466s"
	Aug 15 23:25:26 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:26.539092962Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 15 23:25:26 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:26.579801466Z" level=info msg="Loading containers: start."
	Aug 15 23:25:26 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:26.753629817Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 15 23:25:27 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:27.897778336Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 15 23:25:27 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:27.941918967Z" level=info msg="Loading containers: done."
	Aug 15 23:25:27 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:27.949162882Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 15 23:25:27 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:27.949300191Z" level=info msg="Daemon has completed initialization"
	Aug 15 23:25:27 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:27.970294492Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 15 23:25:27 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:27.970499353Z" level=info msg="API listen on [::]:2376"
	Aug 15 23:25:27 ha-138000-m02 systemd[1]: Started Docker Application Container Engine.
	Aug 15 23:25:29 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:29.040016751Z" level=info msg="Processing signal 'terminated'"
	Aug 15 23:25:29 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:29.040919337Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 15 23:25:29 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:29.041235066Z" level=info msg="Daemon shutdown complete"
	Aug 15 23:25:29 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:29.041343453Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 15 23:25:29 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:29.041349896Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 15 23:25:29 ha-138000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Aug 15 23:25:30 ha-138000-m02 systemd[1]: docker.service: Deactivated successfully.
	Aug 15 23:25:30 ha-138000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Aug 15 23:25:30 ha-138000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 15 23:25:30 ha-138000-m02 dockerd[1088]: time="2024-08-15T23:25:30.078915638Z" level=info msg="Starting up"
	Aug 15 23:26:30 ha-138000-m02 dockerd[1088]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 15 23:26:30 ha-138000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 15 23:26:30 ha-138000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 15 23:26:30 ha-138000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0815 16:26:29.868131    3649 out.go:270] * 
	W0815 16:26:29.869562    3649 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 16:26:29.930973    3649 out.go:201] 
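	
	The daemon restart above dies because dockerd cannot reach its managed containerd over /run/containerd/containerd.sock within its startup deadline. A minimal Go sketch of that same check, assuming only the socket path printed in the failure line (run inside the guest, e.g. via minikube ssh):
	
	// probe_containerd.go: does the containerd socket accept a connection?
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// The path comes from the dockerd error above; 5s stands in for
		// dockerd's own (longer) startup deadline.
		conn, err := net.DialTimeout("unix", "/run/containerd/containerd.sock", 5*time.Second)
		if err != nil {
			fmt.Println("containerd socket not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("containerd socket accepted a connection")
	}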
	
	
	==> Docker <==
	Aug 15 23:25:17 ha-138000 dockerd[1164]: time="2024-08-15T23:25:17.978133369Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 15 23:25:18 ha-138000 dockerd[1157]: time="2024-08-15T23:25:18.986360510Z" level=info msg="ignoring event" container=1f4461ece73e6b17b4da653e21e9ba3a76d561a690ded8920211ea452e758a54 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 15 23:25:18 ha-138000 dockerd[1164]: time="2024-08-15T23:25:18.986525796Z" level=info msg="shim disconnected" id=1f4461ece73e6b17b4da653e21e9ba3a76d561a690ded8920211ea452e758a54 namespace=moby
	Aug 15 23:25:18 ha-138000 dockerd[1164]: time="2024-08-15T23:25:18.986632604Z" level=warning msg="cleaning up after shim disconnected" id=1f4461ece73e6b17b4da653e21e9ba3a76d561a690ded8920211ea452e758a54 namespace=moby
	Aug 15 23:25:18 ha-138000 dockerd[1164]: time="2024-08-15T23:25:18.986640561Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 15 23:25:31 ha-138000 dockerd[1164]: time="2024-08-15T23:25:31.964662511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 15 23:25:31 ha-138000 dockerd[1164]: time="2024-08-15T23:25:31.964740914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 15 23:25:31 ha-138000 dockerd[1164]: time="2024-08-15T23:25:31.965171776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:25:31 ha-138000 dockerd[1164]: time="2024-08-15T23:25:31.965294044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:25:38 ha-138000 dockerd[1164]: time="2024-08-15T23:25:38.961042551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 15 23:25:38 ha-138000 dockerd[1164]: time="2024-08-15T23:25:38.961174519Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 15 23:25:38 ha-138000 dockerd[1164]: time="2024-08-15T23:25:38.961206514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:25:38 ha-138000 dockerd[1164]: time="2024-08-15T23:25:38.961334482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:25:59 ha-138000 dockerd[1157]: time="2024-08-15T23:25:59.760572159Z" level=info msg="ignoring event" container=4745e33319a09d9bd5f6d9af75281ffc2c6681d2c3666cbb617dc23792e131ae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 15 23:25:59 ha-138000 dockerd[1164]: time="2024-08-15T23:25:59.761266003Z" level=info msg="shim disconnected" id=4745e33319a09d9bd5f6d9af75281ffc2c6681d2c3666cbb617dc23792e131ae namespace=moby
	Aug 15 23:25:59 ha-138000 dockerd[1164]: time="2024-08-15T23:25:59.761315524Z" level=warning msg="cleaning up after shim disconnected" id=4745e33319a09d9bd5f6d9af75281ffc2c6681d2c3666cbb617dc23792e131ae namespace=moby
	Aug 15 23:25:59 ha-138000 dockerd[1164]: time="2024-08-15T23:25:59.761324344Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 15 23:25:59 ha-138000 dockerd[1157]: time="2024-08-15T23:25:59.792363842Z" level=info msg="ignoring event" container=b9045283b928de777b7f4d15d6de8027f3786562c871977d957e2dd9d2a95b29 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 15 23:25:59 ha-138000 dockerd[1164]: time="2024-08-15T23:25:59.792820549Z" level=info msg="shim disconnected" id=b9045283b928de777b7f4d15d6de8027f3786562c871977d957e2dd9d2a95b29 namespace=moby
	Aug 15 23:25:59 ha-138000 dockerd[1164]: time="2024-08-15T23:25:59.792922467Z" level=warning msg="cleaning up after shim disconnected" id=b9045283b928de777b7f4d15d6de8027f3786562c871977d957e2dd9d2a95b29 namespace=moby
	Aug 15 23:25:59 ha-138000 dockerd[1164]: time="2024-08-15T23:25:59.792961894Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 15 23:26:22 ha-138000 dockerd[1164]: time="2024-08-15T23:26:22.968000771Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 15 23:26:22 ha-138000 dockerd[1164]: time="2024-08-15T23:26:22.968069651Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 15 23:26:22 ha-138000 dockerd[1164]: time="2024-08-15T23:26:22.968081146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:26:22 ha-138000 dockerd[1164]: time="2024-08-15T23:26:22.968181768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	065b34908ec98       045733566833c                                                                                         9 seconds ago        Running             kube-controller-manager   3                   4650262cc9c5d       kube-controller-manager-ha-138000
	b9045283b928d       604f5db92eaa8                                                                                         53 seconds ago       Exited              kube-apiserver            2                   7152268f8eec4       kube-apiserver-ha-138000
	4745e33319a09       045733566833c                                                                                         About a minute ago   Exited              kube-controller-manager   2                   4650262cc9c5d       kube-controller-manager-ha-138000
	efbc09be8eda5       38af8ddebf499                                                                                         About a minute ago   Running             kube-vip                  0                   0c665afd15e6f       kube-vip-ha-138000
	589038a9e36bd       2e96e5913fc06                                                                                         About a minute ago   Running             etcd                      1                   ec285d4826baa       etcd-ha-138000
	ac6935271595c       1766f54c897f0                                                                                         About a minute ago   Running             kube-scheduler            1                   07c1c62e41d3a       kube-scheduler-ha-138000
	8f20284cd3969       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   4 minutes ago        Exited              busybox                   0                   bfc975a528b9e       busybox-7dff88458-wgww9
	42f5d82b00417       cbb01a7bd410d                                                                                         6 minutes ago        Exited              coredns                   0                   10891f8fbffcc       coredns-6f6b679f8f-dmgt5
	3e8b806ef4f33       cbb01a7bd410d                                                                                         6 minutes ago        Exited              coredns                   0                   096ab15603b01       coredns-6f6b679f8f-zc8jj
	6a1122913bb18       6e38f40d628db                                                                                         6 minutes ago        Exited              storage-provisioner       0                   e30dde4a5a10d       storage-provisioner
	c2a16126718b3       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              6 minutes ago        Exited              kindnet-cni               0                   e260a94a203af       kindnet-77dc6
	fc2e141007efb       ad83b2ca7b09e                                                                                         6 minutes ago        Exited              kube-proxy                0                   5b40cdd6b2c24       kube-proxy-cznkn
	e919017e14bb9       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago        Exited              kube-vip                  0                   db3c88138b89a       kube-vip-ha-138000
	7c25fb975759b       1766f54c897f0                                                                                         7 minutes ago        Exited              kube-scheduler            0                   edd6b77fdd102       kube-scheduler-ha-138000
	0cde5d8b93f58       2e96e5913fc06                                                                                         7 minutes ago        Exited              etcd                      0                   d0d07c194103e       etcd-ha-138000
	
	
	==> coredns [3e8b806ef4f3] <==
	[INFO] 10.244.2.2:44773 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000075522s
	[INFO] 10.244.2.2:53805 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098349s
	[INFO] 10.244.2.2:34369 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000122495s
	[INFO] 10.244.0.4:59671 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000077646s
	[INFO] 10.244.0.4:41185 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000079139s
	[INFO] 10.244.0.4:42405 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000092065s
	[INFO] 10.244.0.4:54373 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000049998s
	[INFO] 10.244.0.4:57169 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000050383s
	[INFO] 10.244.0.4:37825 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085108s
	[INFO] 10.244.1.2:59685 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000072268s
	[INFO] 10.244.1.2:32923 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073054s
	[INFO] 10.244.2.2:50876 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000068102s
	[INFO] 10.244.2.2:54719 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000762s
	[INFO] 10.244.0.4:57395 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000091608s
	[INFO] 10.244.0.4:37936 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000031052s
	[INFO] 10.244.1.2:58408 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000088888s
	[INFO] 10.244.1.2:42731 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000114857s
	[INFO] 10.244.1.2:41638 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000082664s
	[INFO] 10.244.2.2:52666 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092331s
	[INFO] 10.244.2.2:41501 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000093116s
	[INFO] 10.244.0.4:48200 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000075447s
	[INFO] 10.244.0.4:35056 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000091854s
	[INFO] 10.244.0.4:36257 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000057922s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [42f5d82b0041] <==
	[INFO] 10.244.1.2:50104 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.009876264s
	[INFO] 10.244.0.4:33653 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115506s
	[INFO] 10.244.0.4:45180 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000042438s
	[INFO] 10.244.1.2:60312 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000068925s
	[INFO] 10.244.1.2:38521 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000124425s
	[INFO] 10.244.1.2:51675 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125646s
	[INFO] 10.244.1.2:33974 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000078827s
	[INFO] 10.244.2.2:38966 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000078816s
	[INFO] 10.244.2.2:56056 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000620092s
	[INFO] 10.244.2.2:32787 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000109221s
	[INFO] 10.244.2.2:55701 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000039601s
	[INFO] 10.244.0.4:52543 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000083971s
	[INFO] 10.244.0.4:55050 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000146353s
	[INFO] 10.244.1.2:52165 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100415s
	[INFO] 10.244.1.2:41123 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060755s
	[INFO] 10.244.2.2:56460 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087503s
	[INFO] 10.244.2.2:36407 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009778s
	[INFO] 10.244.0.4:40764 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000037536s
	[INFO] 10.244.0.4:58473 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000029335s
	[INFO] 10.244.1.2:38640 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000118481s
	[INFO] 10.244.2.2:46151 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000117088s
	[INFO] 10.244.2.2:34054 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000108858s
	[INFO] 10.244.0.4:56735 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000069666s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0815 23:26:31.456875    2650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0815 23:26:31.458210    2650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0815 23:26:31.459718    2650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0815 23:26:31.461262    2650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0815 23:26:31.462896    2650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
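	
	Every error above is the same symptom: nothing is listening on localhost:8443, so kubectl cannot fetch the API group list. A short Go sketch of the equivalent probe, assuming the endpoint from the log and skipping certificate verification only because this mirrors a local liveness check:
	
	// probe_apiserver.go: is anything serving https://localhost:8443/healthz?
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Verification is skipped because this is a local health probe only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://localhost:8443/healthz")
		if err != nil {
			// With the apiserver down this prints "connect: connection refused",
			// matching the memcache.go errors above.
			fmt.Println("apiserver not reachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
	}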
	
	
	==> dmesg <==
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.035894] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007974] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.668520] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007326] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.770013] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +1.360688] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000015] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +1.372641] systemd-fstab-generator[478]: Ignoring "noauto" option for root device
	[  +0.100663] systemd-fstab-generator[490]: Ignoring "noauto" option for root device
	[  +1.991571] systemd-fstab-generator[1087]: Ignoring "noauto" option for root device
	[  +0.238035] systemd-fstab-generator[1123]: Ignoring "noauto" option for root device
	[  +0.057754] kauditd_printk_skb: 101 callbacks suppressed
	[  +0.054898] systemd-fstab-generator[1135]: Ignoring "noauto" option for root device
	[  +0.128367] systemd-fstab-generator[1149]: Ignoring "noauto" option for root device
	[  +2.481651] systemd-fstab-generator[1367]: Ignoring "noauto" option for root device
	[  +0.106020] systemd-fstab-generator[1379]: Ignoring "noauto" option for root device
	[  +0.099803] systemd-fstab-generator[1391]: Ignoring "noauto" option for root device
	[  +0.128219] systemd-fstab-generator[1406]: Ignoring "noauto" option for root device
	[  +0.443324] systemd-fstab-generator[1569]: Ignoring "noauto" option for root device
	[  +6.920369] kauditd_printk_skb: 212 callbacks suppressed
	[Aug15 23:25] kauditd_printk_skb: 40 callbacks suppressed
	
	
	==> etcd [0cde5d8b93f5] <==
	2024/08/15 23:24:23 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-15T23:24:23.473964Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"645.370724ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-15T23:24:23.473975Z","caller":"traceutil/trace.go:171","msg":"trace[1846477602] range","detail":"{range_begin:/registry/controllerrevisions/; range_end:/registry/controllerrevisions0; }","duration":"645.382858ms","start":"2024-08-15T23:24:22.828589Z","end":"2024-08-15T23:24:23.473972Z","steps":["trace[1846477602] 'agreement among raft nodes before linearized reading'  (duration: 645.370611ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T23:24:23.473985Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T23:24:22.828583Z","time spent":"645.398405ms","remote":"127.0.0.1:48088","response type":"/etcdserverpb.KV/Range","request count":0,"request size":66,"response count":0,"response size":0,"request content":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" count_only:true "}
	2024/08/15 23:24:23 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-15T23:24:23.523484Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T23:24:23.523513Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-15T23:24:23.523614Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-15T23:24:23.526296Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"2399e7dba5b18dfe"}
	{"level":"info","ts":"2024-08-15T23:24:23.526315Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"2399e7dba5b18dfe"}
	{"level":"info","ts":"2024-08-15T23:24:23.526356Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"2399e7dba5b18dfe"}
	{"level":"info","ts":"2024-08-15T23:24:23.526410Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"2399e7dba5b18dfe"}
	{"level":"info","ts":"2024-08-15T23:24:23.526458Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"2399e7dba5b18dfe"}
	{"level":"info","ts":"2024-08-15T23:24:23.526483Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"2399e7dba5b18dfe"}
	{"level":"info","ts":"2024-08-15T23:24:23.526491Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"2399e7dba5b18dfe"}
	{"level":"info","ts":"2024-08-15T23:24:23.526495Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c8daa22dc1df7d56"}
	{"level":"info","ts":"2024-08-15T23:24:23.526502Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c8daa22dc1df7d56"}
	{"level":"info","ts":"2024-08-15T23:24:23.526513Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c8daa22dc1df7d56"}
	{"level":"info","ts":"2024-08-15T23:24:23.526797Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56"}
	{"level":"info","ts":"2024-08-15T23:24:23.526821Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56"}
	{"level":"info","ts":"2024-08-15T23:24:23.526843Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56"}
	{"level":"info","ts":"2024-08-15T23:24:23.526851Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c8daa22dc1df7d56"}
	{"level":"info","ts":"2024-08-15T23:24:23.528360Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-15T23:24:23.528429Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-15T23:24:23.528442Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-138000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> etcd [589038a9e36b] <==
	{"level":"warn","ts":"2024-08-15T23:26:28.253303Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419292119061,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-08-15T23:26:28.354120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-15T23:26:28.354428Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-15T23:26:28.354550Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-08-15T23:26:28.354839Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to 2399e7dba5b18dfe at term 2"}
	{"level":"info","ts":"2024-08-15T23:26:28.355083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to c8daa22dc1df7d56 at term 2"}
	{"level":"warn","ts":"2024-08-15T23:26:28.754148Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419292119061,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-15T23:26:29.256225Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419292119061,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-15T23:26:29.756893Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419292119061,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-08-15T23:26:29.953226Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-15T23:26:29.953309Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-15T23:26:29.953328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-08-15T23:26:29.953346Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to 2399e7dba5b18dfe at term 2"}
	{"level":"info","ts":"2024-08-15T23:26:29.953357Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to c8daa22dc1df7d56 at term 2"}
	{"level":"warn","ts":"2024-08-15T23:26:30.257266Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419292119061,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-15T23:26:30.757984Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419292119061,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-15T23:26:31.226168Z","caller":"etcdserver/v3_server.go:932","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
	{"level":"warn","ts":"2024-08-15T23:26:31.226254Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.00105434s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2024-08-15T23:26:31.226285Z","caller":"traceutil/trace.go:171","msg":"trace[1146550468] range","detail":"{range_begin:; range_end:; }","duration":"7.001099689s","start":"2024-08-15T23:26:24.225173Z","end":"2024-08-15T23:26:31.226273Z","steps":["trace[1146550468] 'agreement among raft nodes before linearized reading'  (duration: 7.001052446s)"],"step_count":1}
	{"level":"error","ts":"2024-08-15T23:26:31.226478Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: etcdserver: request timed out\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-08-15T23:26:31.553047Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-15T23:26:31.553082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-15T23:26:31.553093Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-08-15T23:26:31.553106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to 2399e7dba5b18dfe at term 2"}
	{"level":"info","ts":"2024-08-15T23:26:31.553114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to c8daa22dc1df7d56 at term 2"}
	
	
	==> kernel <==
	 23:26:31 up 1 min,  0 users,  load average: 0.36, 0.17, 0.06
	Linux ha-138000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [c2a16126718b] <==
	I0815 23:23:47.704130       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:23:57.712115       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0815 23:23:57.712139       1 main.go:299] handling current node
	I0815 23:23:57.712152       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0815 23:23:57.712157       1 main.go:322] Node ha-138000-m02 has CIDR [10.244.1.0/24] 
	I0815 23:23:57.712420       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0815 23:23:57.712543       1 main.go:322] Node ha-138000-m03 has CIDR [10.244.2.0/24] 
	I0815 23:23:57.712720       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0815 23:23:57.712823       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:24:07.712424       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0815 23:24:07.712474       1 main.go:299] handling current node
	I0815 23:24:07.712488       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0815 23:24:07.712494       1 main.go:322] Node ha-138000-m02 has CIDR [10.244.1.0/24] 
	I0815 23:24:07.712623       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0815 23:24:07.712704       1 main.go:322] Node ha-138000-m03 has CIDR [10.244.2.0/24] 
	I0815 23:24:07.712814       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0815 23:24:07.712851       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:24:17.705680       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0815 23:24:17.705716       1 main.go:322] Node ha-138000-m02 has CIDR [10.244.1.0/24] 
	I0815 23:24:17.706225       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0815 23:24:17.706282       1 main.go:322] Node ha-138000-m03 has CIDR [10.244.2.0/24] 
	I0815 23:24:17.706514       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0815 23:24:17.706582       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:24:17.706957       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0815 23:24:17.707108       1 main.go:299] handling current node
	
	
	==> kube-apiserver [b9045283b928] <==
	I0815 23:25:39.043226       1 options.go:228] external host was not specified, using 192.169.0.5
	I0815 23:25:39.044546       1 server.go:142] Version: v1.31.0
	I0815 23:25:39.044663       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:25:39.768547       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0815 23:25:39.772537       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 23:25:39.775125       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0815 23:25:39.775138       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0815 23:25:39.775319       1 instance.go:232] Using reconciler: lease
	W0815 23:25:59.767712       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0815 23:25:59.768676       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0815 23:25:59.776247       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0815 23:25:59.776277       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
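	
	The fatal line is the apiserver giving up after its storage-factory deadline: every gRPC subchannel to 127.0.0.1:2379 failed its handshake. A stripped-down sketch of that blocking dial (TLS omitted for brevity, so against a TLS-only etcd the handshake itself fails, much like the log):
	
	// probe_etcd_grpc.go: blocking gRPC dial bounded by a context deadline.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
	)
	
	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 20*time.Second)
		defer cancel()
		conn, err := grpc.DialContext(ctx, "127.0.0.1:2379",
			grpc.WithTransportCredentials(insecure.NewCredentials()),
			grpc.WithBlock(), // block until connected or the deadline expires
		)
		if err != nil {
			// With etcd unreachable this is context.DeadlineExceeded,
			// the same condition instance.go:225 reports above.
			fmt.Println("dial failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("connected to", conn.Target())
	}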
	
	
	==> kube-controller-manager [065b34908ec9] <==
	I0815 23:26:23.285232       1 serving.go:386] Generated self-signed cert in-memory
	I0815 23:26:23.782222       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0815 23:26:23.782291       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:26:23.783394       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0815 23:26:23.783594       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0815 23:26:23.783712       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0815 23:26:23.783867       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [4745e33319a0] <==
	I0815 23:25:32.517286       1 serving.go:386] Generated self-signed cert in-memory
	I0815 23:25:32.734840       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0815 23:25:32.734872       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:25:32.736705       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0815 23:25:32.736731       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0815 23:25:32.736740       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0815 23:25:32.736747       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0815 23:25:59.739804       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.5:8443/healthz\": net/http: TLS handshake timeout"
	
	
	==> kube-proxy [fc2e141007ef] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 23:19:33.922056       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 23:19:33.939645       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0815 23:19:33.939881       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 23:19:33.966815       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 23:19:33.966963       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 23:19:33.967061       1 server_linux.go:169] "Using iptables Proxier"
	I0815 23:19:33.969119       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 23:19:33.969437       1 server.go:483] "Version info" version="v1.31.0"
	I0815 23:19:33.969466       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:19:33.970289       1 config.go:197] "Starting service config controller"
	I0815 23:19:33.970403       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 23:19:33.970441       1 config.go:104] "Starting endpoint slice config controller"
	I0815 23:19:33.970446       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 23:19:33.970870       1 config.go:326] "Starting node config controller"
	I0815 23:19:33.970895       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 23:19:34.070930       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 23:19:34.070930       1 shared_informer.go:320] Caches are synced for node config
	I0815 23:19:34.070944       1 shared_informer.go:320] Caches are synced for service config
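	
	The truncated errors at the top of this section are kube-proxy's nftables cleanup failing with "Operation not supported": the guest kernel lacks nf_tables, so the proxy logs the failure and proceeds in iptables mode ("Using iptables Proxier"). A sketch of the same capability probe, assuming the nft binary is present and the program runs as root:
	
	// check_nft.go: can this kernel process nftables rules at all?
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Mirrors the "add table" command kube-proxy attempted above.
		out, err := exec.Command("nft", "add", "table", "ip", "probe").CombinedOutput()
		if err != nil {
			fmt.Printf("nftables unavailable: %v\n%s", err, out)
			return
		}
		// Remove the probe table so the check leaves no trace.
		exec.Command("nft", "delete", "table", "ip", "probe").Run()
		fmt.Println("nftables supported")
	}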
	
	
	==> kube-scheduler [7c25fb975759] <==
	E0815 23:19:26.587225       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0815 23:19:27.147361       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0815 23:22:08.672878       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-t5sdh\": pod busybox-7dff88458-t5sdh is already assigned to node \"ha-138000-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-t5sdh" node="ha-138000-m03"
	E0815 23:22:08.672963       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod b81fe134-5ef5-4074-920a-105e4bd801be(default/busybox-7dff88458-t5sdh) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-t5sdh"
	E0815 23:22:08.672983       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-t5sdh\": pod busybox-7dff88458-t5sdh is already assigned to node \"ha-138000-m03\"" pod="default/busybox-7dff88458-t5sdh"
	I0815 23:22:08.673000       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-t5sdh" node="ha-138000-m03"
	E0815 23:22:08.673278       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-wgww9\": pod busybox-7dff88458-wgww9 is already assigned to node \"ha-138000\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-wgww9" node="ha-138000"
	E0815 23:22:08.673460       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod b8eb799e-e761-4647-8aae-388c38bc936e(default/busybox-7dff88458-wgww9) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-wgww9"
	E0815 23:22:08.673519       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-wgww9\": pod busybox-7dff88458-wgww9 is already assigned to node \"ha-138000\"" pod="default/busybox-7dff88458-wgww9"
	I0815 23:22:08.673609       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-wgww9" node="ha-138000"
	E0815 23:22:36.177995       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-qpth7\": pod kube-proxy-qpth7 is already assigned to node \"ha-138000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-qpth7" node="ha-138000-m04"
	E0815 23:22:36.178149       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a343f80b-0fe9-4c88-9782-5fbf9a6170d1(kube-system/kube-proxy-qpth7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-qpth7"
	E0815 23:22:36.178181       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-qpth7\": pod kube-proxy-qpth7 is already assigned to node \"ha-138000-m04\"" pod="kube-system/kube-proxy-qpth7"
	I0815 23:22:36.178207       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-qpth7" node="ha-138000-m04"
	E0815 23:22:36.181318       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-m887r\": pod kindnet-m887r is already assigned to node \"ha-138000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-m887r" node="ha-138000-m04"
	E0815 23:22:36.181425       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod ba31865b-c712-47a8-9fd8-06420270ac8b(kube-system/kindnet-m887r) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-m887r"
	E0815 23:22:36.181440       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-m887r\": pod kindnet-m887r is already assigned to node \"ha-138000-m04\"" pod="kube-system/kindnet-m887r"
	I0815 23:22:36.181451       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-m887r" node="ha-138000-m04"
	E0815 23:22:36.197728       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-xc8mj\": pod kube-proxy-xc8mj is already assigned to node \"ha-138000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-xc8mj" node="ha-138000-m04"
	E0815 23:22:36.197783       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 661886ed-7ec0-401d-b893-4dd74852e477(kube-system/kube-proxy-xc8mj) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-xc8mj"
	E0815 23:22:36.197797       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-xc8mj\": pod kube-proxy-xc8mj is already assigned to node \"ha-138000-m04\"" pod="kube-system/kube-proxy-xc8mj"
	I0815 23:22:36.197815       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-xc8mj" node="ha-138000-m04"
	I0815 23:24:23.554288       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0815 23:24:23.554620       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0815 23:24:23.554869       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ac6935271595] <==
	E0815 23:26:00.784286       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:57392->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0815 23:26:00.783926       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:57406->192.169.0.5:8443: read: connection reset by peer
	E0815 23:26:00.785121       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:57406->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0815 23:26:01.609073       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:26:01.609171       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:26:01.749747       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:26:01.749877       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:26:01.819482       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:26:01.819537       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:26:01.965789       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:26:01.965841       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:26:01.977376       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:26:01.977429       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:26:03.356310       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:26:03.356405       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:26:27.368092       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:26:27.368199       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:26:28.485138       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:26:28.485309       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:26:28.517855       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:26:28.518254       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:26:30.367151       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:26:30.367350       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:26:32.176181       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:26:32.176274       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kubelet <==
	Aug 15 23:26:06 ha-138000 kubelet[1576]: E0815 23:26:06.653499    1576 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-138000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Aug 15 23:26:06 ha-138000 kubelet[1576]: E0815 23:26:06.653301    1576 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-138000"
	Aug 15 23:26:09 ha-138000 kubelet[1576]: I0815 23:26:09.765385    1576 scope.go:117] "RemoveContainer" containerID="4745e33319a09d9bd5f6d9af75281ffc2c6681d2c3666cbb617dc23792e131ae"
	Aug 15 23:26:09 ha-138000 kubelet[1576]: E0815 23:26:09.765631    1576 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-138000_kube-system(ed196a03081880609aebd781f662c0b9)\"" pod="kube-system/kube-controller-manager-ha-138000" podUID="ed196a03081880609aebd781f662c0b9"
	Aug 15 23:26:09 ha-138000 kubelet[1576]: E0815 23:26:09.930679    1576 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-138000\" not found"
	Aug 15 23:26:13 ha-138000 kubelet[1576]: I0815 23:26:13.656566    1576 kubelet_node_status.go:72] "Attempting to register node" node="ha-138000"
	Aug 15 23:26:15 ha-138000 kubelet[1576]: E0815 23:26:15.871758    1576 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-138000"
	Aug 15 23:26:15 ha-138000 kubelet[1576]: E0815 23:26:15.871942    1576 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-138000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Aug 15 23:26:15 ha-138000 kubelet[1576]: E0815 23:26:15.872282    1576 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-138000.17ec0a7d1e5ef862  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-138000,UID:ha-138000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-138000,},FirstTimestamp:2024-08-15 23:24:49.872787554 +0000 UTC m=+0.202964169,LastTimestamp:2024-08-15 23:24:49.872787554 +0000 UTC m=+0.202964169,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-138000,}"
	Aug 15 23:26:18 ha-138000 kubelet[1576]: I0815 23:26:18.923144    1576 scope.go:117] "RemoveContainer" containerID="b9045283b928de777b7f4d15d6de8027f3786562c871977d957e2dd9d2a95b29"
	Aug 15 23:26:18 ha-138000 kubelet[1576]: E0815 23:26:18.923330    1576 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-138000_kube-system(8df20a622868f60a60f4423e49478fa2)\"" pod="kube-system/kube-apiserver-ha-138000" podUID="8df20a622868f60a60f4423e49478fa2"
	Aug 15 23:26:19 ha-138000 kubelet[1576]: E0815 23:26:19.931510    1576 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-138000\" not found"
	Aug 15 23:26:22 ha-138000 kubelet[1576]: W0815 23:26:22.016148    1576 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 15 23:26:22 ha-138000 kubelet[1576]: E0815 23:26:22.016798    1576 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 15 23:26:22 ha-138000 kubelet[1576]: W0815 23:26:22.016287    1576 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 15 23:26:22 ha-138000 kubelet[1576]: E0815 23:26:22.017033    1576 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 15 23:26:22 ha-138000 kubelet[1576]: I0815 23:26:22.873973    1576 kubelet_node_status.go:72] "Attempting to register node" node="ha-138000"
	Aug 15 23:26:22 ha-138000 kubelet[1576]: I0815 23:26:22.922717    1576 scope.go:117] "RemoveContainer" containerID="4745e33319a09d9bd5f6d9af75281ffc2c6681d2c3666cbb617dc23792e131ae"
	Aug 15 23:26:25 ha-138000 kubelet[1576]: W0815 23:26:25.085523    1576 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 15 23:26:25 ha-138000 kubelet[1576]: E0815 23:26:25.086185    1576 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 15 23:26:25 ha-138000 kubelet[1576]: E0815 23:26:25.085810    1576 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-138000"
	Aug 15 23:26:25 ha-138000 kubelet[1576]: E0815 23:26:25.085733    1576 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-138000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Aug 15 23:26:28 ha-138000 kubelet[1576]: E0815 23:26:28.161180    1576 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-138000.17ec0a7d1e5ef862  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-138000,UID:ha-138000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-138000,},FirstTimestamp:2024-08-15 23:24:49.872787554 +0000 UTC m=+0.202964169,LastTimestamp:2024-08-15 23:24:49.872787554 +0000 UTC m=+0.202964169,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-138000,}"
	Aug 15 23:26:29 ha-138000 kubelet[1576]: E0815 23:26:29.932625    1576 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-138000\" not found"
	Aug 15 23:26:32 ha-138000 kubelet[1576]: I0815 23:26:32.088314    1576 kubelet_node_status.go:72] "Attempting to register node" node="ha-138000"
	

                                                
                                                
-- /stdout --
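The dial errors in the dump above all point at the apiserver endpoints rather than at the scheduler or kubelet themselves: 192.169.0.5:8443 refuses connections, and the control-plane VIP 192.169.0.254:8443 has no route. As a triage aid, here is a minimal Go probe of the same healthz endpoint the harness polls at api_server.go:253; the 5s timeout matches the deadline visible in the status logs below, and skipping TLS verification stands in for minikube's client certificates, so this is a sketch, not the harness's code:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Probe the same endpoint the harness checks at api_server.go:253.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.169.0.254:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // e.g. "connect: no route to host", as in the log
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
	}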
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-138000 -n ha-138000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-138000 -n ha-138000: exit status 2 (171.168974ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-138000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (148.70s)
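The status --format flag used above is a Go text/template rendered against minikube's status struct; {{.APIServer}} selects the field that prints "Stopped" here. A self-contained sketch of that mechanism (the struct below is trimmed to the fields visible in this report's status dumps, and its name is illustrative):

	package main

	import (
		"os"
		"text/template"
	)

	// Status mirrors the fields visible in the report's status output.
	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		st := Status{Name: "ha-138000", Host: "Running", Kubelet: "Running", APIServer: "Stopped", Kubeconfig: "Configured"}
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		_ = tmpl.Execute(os.Stdout, st) // prints: Stopped
	}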

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (34.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-138000 node delete m03 -v=7 --alsologtostderr: exit status 83 (174.07074ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-138000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-138000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 16:26:32.776121    3722 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:26:32.776396    3722 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:26:32.776402    3722 out.go:358] Setting ErrFile to fd 2...
	I0815 16:26:32.776405    3722 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:26:32.776586    3722 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-977/.minikube/bin
	I0815 16:26:32.776902    3722 mustload.go:65] Loading cluster: ha-138000
	I0815 16:26:32.777210    3722 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:26:32.777578    3722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:26:32.777625    3722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:26:32.786139    3722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52091
	I0815 16:26:32.786531    3722 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:26:32.786950    3722 main.go:141] libmachine: Using API Version  1
	I0815 16:26:32.786980    3722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:26:32.787184    3722 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:26:32.787370    3722 main.go:141] libmachine: (ha-138000) Calling .GetState
	I0815 16:26:32.787480    3722 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:26:32.787554    3722 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid from json: 3662
	I0815 16:26:32.788524    3722 host.go:66] Checking if "ha-138000" exists ...
	I0815 16:26:32.788775    3722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:26:32.788803    3722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:26:32.797395    3722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52093
	I0815 16:26:32.797734    3722 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:26:32.798136    3722 main.go:141] libmachine: Using API Version  1
	I0815 16:26:32.798157    3722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:26:32.798390    3722 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:26:32.798512    3722 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:26:32.798886    3722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:26:32.798917    3722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:26:32.807535    3722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52095
	I0815 16:26:32.807857    3722 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:26:32.808212    3722 main.go:141] libmachine: Using API Version  1
	I0815 16:26:32.808247    3722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:26:32.808481    3722 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:26:32.808591    3722 main.go:141] libmachine: (ha-138000-m02) Calling .GetState
	I0815 16:26:32.808678    3722 main.go:141] libmachine: (ha-138000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:26:32.808762    3722 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid from json: 3670
	I0815 16:26:32.809703    3722 host.go:66] Checking if "ha-138000-m02" exists ...
	I0815 16:26:32.809959    3722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:26:32.809984    3722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:26:32.818501    3722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52097
	I0815 16:26:32.818850    3722 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:26:32.819171    3722 main.go:141] libmachine: Using API Version  1
	I0815 16:26:32.819181    3722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:26:32.819406    3722 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:26:32.819519    3722 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:26:32.819890    3722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:26:32.819918    3722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:26:32.828266    3722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52099
	I0815 16:26:32.828604    3722 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:26:32.828962    3722 main.go:141] libmachine: Using API Version  1
	I0815 16:26:32.828976    3722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:26:32.829170    3722 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:26:32.829272    3722 main.go:141] libmachine: (ha-138000-m03) Calling .GetState
	I0815 16:26:32.829355    3722 main.go:141] libmachine: (ha-138000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:26:32.829443    3722 main.go:141] libmachine: (ha-138000-m03) DBG | hyperkit pid from json: 3119
	I0815 16:26:32.830392    3722 main.go:141] libmachine: (ha-138000-m03) DBG | hyperkit pid 3119 missing from process table
	I0815 16:26:32.852048    3722 out.go:177] * The control-plane node ha-138000-m03 host is not running: state=Stopped
	I0815 16:26:32.872689    3722 out.go:177]   To start a cluster, run: "minikube start -p ha-138000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-amd64 -p ha-138000 node delete m03 -v=7 --alsologtostderr": exit status 83
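The DBG lines in the stderr block above show why the delete refused to proceed: the driver read hyperkit pid 3119 from the profile's json, found it missing from the process table, and reported state=Stopped. The usual Unix idiom for that liveness check is kill(pid, 0), which performs error checking without delivering a signal; the sketch below illustrates the idiom, not the hyperkit driver's actual code:

	package main

	import (
		"fmt"
		"syscall"
	)

	// pidAlive reports whether a process with the given pid exists:
	// signal 0 delivers nothing but still validates the target.
	func pidAlive(pid int) bool {
		err := syscall.Kill(pid, 0)
		// ESRCH: no such process. EPERM: exists, but owned by another user.
		return err == nil || err == syscall.EPERM
	}

	func main() {
		fmt.Println(pidAlive(3119)) // pid recorded for ha-138000-m03 in the log above
	}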
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-138000 status -v=7 --alsologtostderr: exit status 7 (13.978663041s)

                                                
                                                
-- stdout --
	ha-138000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-138000-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-138000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-138000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 16:26:32.951927    3729 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:26:32.952126    3729 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:26:32.952132    3729 out.go:358] Setting ErrFile to fd 2...
	I0815 16:26:32.952136    3729 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:26:32.952315    3729 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-977/.minikube/bin
	I0815 16:26:32.952497    3729 out.go:352] Setting JSON to false
	I0815 16:26:32.952518    3729 mustload.go:65] Loading cluster: ha-138000
	I0815 16:26:32.952558    3729 notify.go:220] Checking for updates...
	I0815 16:26:32.952843    3729 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:26:32.952859    3729 status.go:255] checking status of ha-138000 ...
	I0815 16:26:32.953229    3729 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:26:32.953313    3729 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:26:32.962326    3729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52102
	I0815 16:26:32.962741    3729 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:26:32.963189    3729 main.go:141] libmachine: Using API Version  1
	I0815 16:26:32.963219    3729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:26:32.963430    3729 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:26:32.963556    3729 main.go:141] libmachine: (ha-138000) Calling .GetState
	I0815 16:26:32.963652    3729 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:26:32.963720    3729 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid from json: 3662
	I0815 16:26:32.964678    3729 status.go:330] ha-138000 host status = "Running" (err=<nil>)
	I0815 16:26:32.964699    3729 host.go:66] Checking if "ha-138000" exists ...
	I0815 16:26:32.964928    3729 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:26:32.964948    3729 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:26:32.973426    3729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52104
	I0815 16:26:32.973792    3729 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:26:32.974167    3729 main.go:141] libmachine: Using API Version  1
	I0815 16:26:32.974183    3729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:26:32.974426    3729 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:26:32.974530    3729 main.go:141] libmachine: (ha-138000) Calling .GetIP
	I0815 16:26:32.974617    3729 host.go:66] Checking if "ha-138000" exists ...
	I0815 16:26:32.974871    3729 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:26:32.974898    3729 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:26:32.983431    3729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52106
	I0815 16:26:32.983747    3729 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:26:32.984062    3729 main.go:141] libmachine: Using API Version  1
	I0815 16:26:32.984077    3729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:26:32.984289    3729 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:26:32.984394    3729 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:26:32.984545    3729 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 16:26:32.984567    3729 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:26:32.984641    3729 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:26:32.984716    3729 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:26:32.984795    3729 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:26:32.984870    3729 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:26:33.021938    3729 ssh_runner.go:195] Run: systemctl --version
	I0815 16:26:33.026309    3729 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 16:26:33.037089    3729 kubeconfig.go:125] found "ha-138000" server: "https://192.169.0.254:8443"
	I0815 16:26:33.037110    3729 api_server.go:166] Checking apiserver status ...
	I0815 16:26:33.037145    3729 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 16:26:33.047431    3729 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2772/cgroup
	W0815 16:26:33.054413    3729 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2772/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 16:26:33.054455    3729 ssh_runner.go:195] Run: ls
	I0815 16:26:33.058042    3729 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0815 16:26:38.060343    3729 api_server.go:269] stopped: https://192.169.0.254:8443/healthz: Get "https://192.169.0.254:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 16:26:38.060418    3729 retry.go:31] will retry after 309.532261ms: state is "Stopped"
	I0815 16:26:38.372018    3729 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0815 16:26:38.372480    3729 api_server.go:269] stopped: https://192.169.0.254:8443/healthz: Get "https://192.169.0.254:8443/healthz": dial tcp 192.169.0.254:8443: connect: no route to host
	I0815 16:26:38.372521    3729 retry.go:31] will retry after 294.487702ms: state is "Stopped"
	I0815 16:26:38.667480    3729 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0815 16:26:38.668001    3729 api_server.go:269] stopped: https://192.169.0.254:8443/healthz: Get "https://192.169.0.254:8443/healthz": dial tcp 192.169.0.254:8443: connect: host is down
	I0815 16:26:38.668036    3729 retry.go:31] will retry after 487.768373ms: state is "Stopped"
	I0815 16:26:39.156014    3729 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0815 16:26:39.156421    3729 api_server.go:269] stopped: https://192.169.0.254:8443/healthz: Get "https://192.169.0.254:8443/healthz": dial tcp 192.169.0.254:8443: connect: host is down
	I0815 16:26:39.156453    3729 retry.go:31] will retry after 499.587435ms: state is "Stopped"
	I0815 16:26:39.658132    3729 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0815 16:26:39.658670    3729 api_server.go:269] stopped: https://192.169.0.254:8443/healthz: Get "https://192.169.0.254:8443/healthz": dial tcp 192.169.0.254:8443: connect: host is down
	I0815 16:26:39.658707    3729 retry.go:31] will retry after 744.543917ms: state is "Stopped"
	I0815 16:26:40.405477    3729 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0815 16:26:40.405927    3729 api_server.go:269] stopped: https://192.169.0.254:8443/healthz: Get "https://192.169.0.254:8443/healthz": dial tcp 192.169.0.254:8443: connect: host is down
	I0815 16:26:40.405959    3729 retry.go:31] will retry after 896.07533ms: state is "Stopped"
	I0815 16:26:41.304273    3729 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0815 16:26:41.304709    3729 api_server.go:269] stopped: https://192.169.0.254:8443/healthz: Get "https://192.169.0.254:8443/healthz": dial tcp 192.169.0.254:8443: connect: host is down
	I0815 16:26:41.304740    3729 retry.go:31] will retry after 1.120933631s: state is "Stopped"
	I0815 16:26:42.425994    3729 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0815 16:26:42.426481    3729 api_server.go:269] stopped: https://192.169.0.254:8443/healthz: Get "https://192.169.0.254:8443/healthz": dial tcp 192.169.0.254:8443: connect: host is down
	I0815 16:26:42.426514    3729 retry.go:31] will retry after 1.217240579s: state is "Stopped"
	I0815 16:26:43.644662    3729 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0815 16:26:43.645105    3729 api_server.go:269] stopped: https://192.169.0.254:8443/healthz: Get "https://192.169.0.254:8443/healthz": dial tcp 192.169.0.254:8443: connect: host is down
	I0815 16:26:43.645139    3729 retry.go:31] will retry after 1.62058854s: state is "Stopped"
	I0815 16:26:45.266165    3729 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0815 16:26:45.266643    3729 api_server.go:269] stopped: https://192.169.0.254:8443/healthz: Get "https://192.169.0.254:8443/healthz": dial tcp 192.169.0.254:8443: connect: host is down
	I0815 16:26:45.266675    3729 retry.go:31] will retry after 1.493243089s: state is "Stopped"
	I0815 16:26:46.761098    3729 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0815 16:26:46.761511    3729 api_server.go:269] stopped: https://192.169.0.254:8443/healthz: Get "https://192.169.0.254:8443/healthz": dial tcp 192.169.0.254:8443: connect: host is down
	I0815 16:26:46.761569    3729 status.go:422] ha-138000 apiserver status = Running (err=<nil>)
	I0815 16:26:46.761586    3729 status.go:257] ha-138000 status: &{Name:ha-138000 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 16:26:46.761611    3729 status.go:255] checking status of ha-138000-m02 ...
	I0815 16:26:46.762094    3729 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:26:46.762133    3729 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:26:46.771782    3729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52120
	I0815 16:26:46.772165    3729 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:26:46.772538    3729 main.go:141] libmachine: Using API Version  1
	I0815 16:26:46.772548    3729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:26:46.772916    3729 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:26:46.773116    3729 main.go:141] libmachine: (ha-138000-m02) Calling .GetState
	I0815 16:26:46.773202    3729 main.go:141] libmachine: (ha-138000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:26:46.773272    3729 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid from json: 3670
	I0815 16:26:46.774180    3729 status.go:330] ha-138000-m02 host status = "Running" (err=<nil>)
	I0815 16:26:46.774191    3729 host.go:66] Checking if "ha-138000-m02" exists ...
	I0815 16:26:46.774431    3729 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:26:46.774509    3729 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:26:46.783378    3729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52122
	I0815 16:26:46.783745    3729 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:26:46.784196    3729 main.go:141] libmachine: Using API Version  1
	I0815 16:26:46.784204    3729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:26:46.784434    3729 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:26:46.784584    3729 main.go:141] libmachine: (ha-138000-m02) Calling .GetIP
	I0815 16:26:46.784695    3729 host.go:66] Checking if "ha-138000-m02" exists ...
	I0815 16:26:46.784951    3729 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:26:46.784972    3729 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:26:46.793490    3729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52124
	I0815 16:26:46.793862    3729 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:26:46.794250    3729 main.go:141] libmachine: Using API Version  1
	I0815 16:26:46.794259    3729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:26:46.794498    3729 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:26:46.794653    3729 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:26:46.794769    3729 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 16:26:46.794791    3729 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:26:46.794874    3729 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:26:46.794976    3729 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:26:46.795050    3729 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:26:46.795134    3729 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	I0815 16:26:46.829640    3729 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 16:26:46.840323    3729 kubeconfig.go:125] found "ha-138000" server: "https://192.169.0.254:8443"
	I0815 16:26:46.840336    3729 api_server.go:166] Checking apiserver status ...
	I0815 16:26:46.840373    3729 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0815 16:26:46.850199    3729 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0815 16:26:46.850209    3729 status.go:422] ha-138000-m02 apiserver status = Stopped (err=<nil>)
	I0815 16:26:46.850219    3729 status.go:257] ha-138000-m02 status: &{Name:ha-138000-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 16:26:46.850229    3729 status.go:255] checking status of ha-138000-m03 ...
	I0815 16:26:46.850486    3729 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:26:46.850517    3729 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:26:46.859072    3729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52127
	I0815 16:26:46.859406    3729 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:26:46.859779    3729 main.go:141] libmachine: Using API Version  1
	I0815 16:26:46.859795    3729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:26:46.860003    3729 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:26:46.860113    3729 main.go:141] libmachine: (ha-138000-m03) Calling .GetState
	I0815 16:26:46.860196    3729 main.go:141] libmachine: (ha-138000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:26:46.860272    3729 main.go:141] libmachine: (ha-138000-m03) DBG | hyperkit pid from json: 3119
	I0815 16:26:46.861176    3729 main.go:141] libmachine: (ha-138000-m03) DBG | hyperkit pid 3119 missing from process table
	I0815 16:26:46.861217    3729 status.go:330] ha-138000-m03 host status = "Stopped" (err=<nil>)
	I0815 16:26:46.861228    3729 status.go:343] host is not running, skipping remaining checks
	I0815 16:26:46.861235    3729 status.go:257] ha-138000-m03 status: &{Name:ha-138000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 16:26:46.861248    3729 status.go:255] checking status of ha-138000-m04 ...
	I0815 16:26:46.861515    3729 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:26:46.861537    3729 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:26:46.869965    3729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52129
	I0815 16:26:46.870301    3729 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:26:46.870642    3729 main.go:141] libmachine: Using API Version  1
	I0815 16:26:46.870665    3729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:26:46.870896    3729 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:26:46.871020    3729 main.go:141] libmachine: (ha-138000-m04) Calling .GetState
	I0815 16:26:46.871112    3729 main.go:141] libmachine: (ha-138000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:26:46.871197    3729 main.go:141] libmachine: (ha-138000-m04) DBG | hyperkit pid from json: 3240
	I0815 16:26:46.872108    3729 main.go:141] libmachine: (ha-138000-m04) DBG | hyperkit pid 3240 missing from process table
	I0815 16:26:46.872131    3729 status.go:330] ha-138000-m04 host status = "Stopped" (err=<nil>)
	I0815 16:26:46.872135    3729 status.go:343] host is not running, skipping remaining checks
	I0815 16:26:46.872143    3729 status.go:257] ha-138000-m04 status: &{Name:ha-138000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-138000 status -v=7 --alsologtostderr" : exit status 7
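Before giving up, the status command polls healthz in a loop; the retry.go:31 lines above show jittered, roughly growing delays (309ms, 294ms, 487ms, ... 1.62s) before the check is abandoned and APIServer is reported Stopped. A generic sketch of that retry shape follows; the attempt count, base delay, and jitter are illustrative, not minikube's actual policy:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retry runs fn up to attempts times, sleeping a jittered, growing
	// delay between tries, in the spirit of the retry.go lines above.
	func retry(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			d := base * time.Duration(1<<uint(i))           // exponential growth
			d += time.Duration(rand.Int63n(int64(d)/2 + 1)) // plus jitter
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		_ = retry(4, 300*time.Millisecond, func() error {
			return fmt.Errorf("state is %q", "Stopped")
		})
	}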
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-138000 -n ha-138000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-138000 -n ha-138000: exit status 2 (17.98241752s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-138000 logs -n 25: (2.134915104s)
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n ha-138000-m02 sudo cat                                                                                      | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /home/docker/cp-test_ha-138000-m03_ha-138000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-138000 cp ha-138000-m03:/home/docker/cp-test.txt                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04:/home/docker/cp-test_ha-138000-m03_ha-138000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n ha-138000-m04 sudo cat                                                                                      | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /home/docker/cp-test_ha-138000-m03_ha-138000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-138000 cp testdata/cp-test.txt                                                                                            | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-138000 cp ha-138000-m04:/home/docker/cp-test.txt                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1119105544/001/cp-test_ha-138000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-138000 cp ha-138000-m04:/home/docker/cp-test.txt                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000:/home/docker/cp-test_ha-138000-m04_ha-138000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n ha-138000 sudo cat                                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /home/docker/cp-test_ha-138000-m04_ha-138000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-138000 cp ha-138000-m04:/home/docker/cp-test.txt                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m02:/home/docker/cp-test_ha-138000-m04_ha-138000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n ha-138000-m02 sudo cat                                                                                      | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /home/docker/cp-test_ha-138000-m04_ha-138000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-138000 cp ha-138000-m04:/home/docker/cp-test.txt                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m03:/home/docker/cp-test_ha-138000-m04_ha-138000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n ha-138000-m03 sudo cat                                                                                      | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /home/docker/cp-test_ha-138000-m04_ha-138000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-138000 node stop m02 -v=7                                                                                                 | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-138000 node start m02 -v=7                                                                                                | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:24 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-138000 -v=7                                                                                                       | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:24 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-138000 -v=7                                                                                                            | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:24 PDT | 15 Aug 24 16:24 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-138000 --wait=true -v=7                                                                                                | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:24 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-138000                                                                                                            | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:26 PDT |                     |
	| node    | ha-138000 node delete m03 -v=7                                                                                               | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:26 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 16:24:31
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 16:24:31.233096    3649 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:24:31.233281    3649 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:24:31.233287    3649 out.go:358] Setting ErrFile to fd 2...
	I0815 16:24:31.233290    3649 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:24:31.233463    3649 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-977/.minikube/bin
	I0815 16:24:31.234892    3649 out.go:352] Setting JSON to false
	I0815 16:24:31.259609    3649 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1442,"bootTime":1723762829,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0815 16:24:31.259835    3649 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 16:24:31.281220    3649 out.go:177] * [ha-138000] minikube v1.33.1 on Darwin 14.6.1
	I0815 16:24:31.323339    3649 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 16:24:31.323394    3649 notify.go:220] Checking for updates...
	I0815 16:24:31.366134    3649 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:24:31.387302    3649 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0815 16:24:31.408076    3649 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 16:24:31.429265    3649 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-977/.minikube
	I0815 16:24:31.450282    3649 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 16:24:31.472864    3649 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:24:31.473038    3649 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 16:24:31.473723    3649 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:24:31.473802    3649 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:24:31.483475    3649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52024
	I0815 16:24:31.483866    3649 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:24:31.484264    3649 main.go:141] libmachine: Using API Version  1
	I0815 16:24:31.484274    3649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:24:31.484483    3649 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:24:31.484590    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:31.513331    3649 out.go:177] * Using the hyperkit driver based on existing profile
	I0815 16:24:31.555013    3649 start.go:297] selected driver: hyperkit
	I0815 16:24:31.555040    3649 start.go:901] validating driver "hyperkit" against &{Name:ha-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:24:31.555294    3649 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 16:24:31.555482    3649 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:24:31.555679    3649 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19452-977/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0815 16:24:31.565322    3649 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0815 16:24:31.570113    3649 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:24:31.570133    3649 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0815 16:24:31.573295    3649 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 16:24:31.573376    3649 cni.go:84] Creating CNI manager for ""
	I0815 16:24:31.573385    3649 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0815 16:24:31.573458    3649 start.go:340] cluster config:
	{Name:ha-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:24:31.573576    3649 iso.go:125] acquiring lock: {Name:mkb93fc66b60be15dffa9eb2999a4be8d8d4d218 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:24:31.616257    3649 out.go:177] * Starting "ha-138000" primary control-plane node in "ha-138000" cluster
	I0815 16:24:31.636985    3649 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:24:31.637060    3649 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0815 16:24:31.637085    3649 cache.go:56] Caching tarball of preloaded images
	I0815 16:24:31.637273    3649 preload.go:172] Found /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0815 16:24:31.637292    3649 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:24:31.637487    3649 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:24:31.638371    3649 start.go:360] acquireMachinesLock for ha-138000: {Name:mkac8034c8a60617271f2754a03dcaf74fc4fef7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:24:31.638490    3649 start.go:364] duration metric: took 82.356µs to acquireMachinesLock for "ha-138000"
	I0815 16:24:31.638525    3649 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:24:31.638544    3649 fix.go:54] fixHost starting: 
	I0815 16:24:31.638958    3649 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:24:31.639008    3649 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:24:31.648062    3649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52026
	I0815 16:24:31.648421    3649 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:24:31.648791    3649 main.go:141] libmachine: Using API Version  1
	I0815 16:24:31.648804    3649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:24:31.649022    3649 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:24:31.649142    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:31.649278    3649 main.go:141] libmachine: (ha-138000) Calling .GetState
	I0815 16:24:31.649372    3649 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:24:31.649446    3649 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid from json: 3071
	I0815 16:24:31.650352    3649 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid 3071 missing from process table
	I0815 16:24:31.650388    3649 fix.go:112] recreateIfNeeded on ha-138000: state=Stopped err=<nil>
	I0815 16:24:31.650403    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	W0815 16:24:31.650489    3649 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:24:31.698042    3649 out.go:177] * Restarting existing hyperkit VM for "ha-138000" ...
	I0815 16:24:31.718584    3649 main.go:141] libmachine: (ha-138000) Calling .Start
	I0815 16:24:31.718879    3649 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:24:31.718940    3649 main.go:141] libmachine: (ha-138000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/hyperkit.pid
	I0815 16:24:31.721002    3649 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid 3071 missing from process table
	I0815 16:24:31.721020    3649 main.go:141] libmachine: (ha-138000) DBG | pid 3071 is in state "Stopped"
	I0815 16:24:31.721044    3649 main.go:141] libmachine: (ha-138000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/hyperkit.pid...
	I0815 16:24:31.721441    3649 main.go:141] libmachine: (ha-138000) DBG | Using UUID bf1b12d0-37a9-4c04-a028-0dd0a6dcd337
	I0815 16:24:31.829003    3649 main.go:141] libmachine: (ha-138000) DBG | Generated MAC 66:4d:cd:54:35:15
	I0815 16:24:31.829029    3649 main.go:141] libmachine: (ha-138000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000
	I0815 16:24:31.829133    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bf1b12d0-37a9-4c04-a028-0dd0a6dcd337", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c24e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:24:31.829169    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bf1b12d0-37a9-4c04-a028-0dd0a6dcd337", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c24e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:24:31.829203    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "bf1b12d0-37a9-4c04-a028-0dd0a6dcd337", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/ha-138000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"}
	I0815 16:24:31.829238    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U bf1b12d0-37a9-4c04-a028-0dd0a6dcd337 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/ha-138000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/console-ring -f kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"
	I0815 16:24:31.829247    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0815 16:24:31.830765    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 DEBUG: hyperkit: Pid is 3662
	I0815 16:24:31.831139    3649 main.go:141] libmachine: (ha-138000) DBG | Attempt 0
	I0815 16:24:31.831155    3649 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:24:31.831242    3649 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid from json: 3662
	I0815 16:24:31.832840    3649 main.go:141] libmachine: (ha-138000) DBG | Searching for 66:4d:cd:54:35:15 in /var/db/dhcpd_leases ...
	I0815 16:24:31.832917    3649 main.go:141] libmachine: (ha-138000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0815 16:24:31.832934    3649 main.go:141] libmachine: (ha-138000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be8e15}
	I0815 16:24:31.832943    3649 main.go:141] libmachine: (ha-138000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfdf74}
	I0815 16:24:31.832962    3649 main.go:141] libmachine: (ha-138000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfdedc}
	I0815 16:24:31.832970    3649 main.go:141] libmachine: (ha-138000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfde64}
	I0815 16:24:31.832977    3649 main.go:141] libmachine: (ha-138000) DBG | Found match: 66:4d:cd:54:35:15
	I0815 16:24:31.833028    3649 main.go:141] libmachine: (ha-138000) DBG | IP: 192.169.0.5
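	The IP above is resolved by scanning the host-side DHCP lease database for the MAC hyperkit generated. A minimal sketch of the same lookup done by hand on the macOS host (illustrative; the exact field layout of the lease file can vary by macOS version):
	  # entries in /var/db/dhcpd_leases look like { name=... ip_address=... hw_address=1,<mac> ... }
	  grep -B 3 'hw_address=1,66:4d:cd:54:35:15' /var/db/dhcpd_leases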
	I0815 16:24:31.833038    3649 main.go:141] libmachine: (ha-138000) Calling .GetConfigRaw
	I0815 16:24:31.833705    3649 main.go:141] libmachine: (ha-138000) Calling .GetIP
	I0815 16:24:31.833895    3649 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:24:31.834359    3649 machine.go:93] provisionDockerMachine start ...
	I0815 16:24:31.834370    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:31.834509    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:31.834611    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:31.834733    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:31.834881    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:31.834976    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:31.835114    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:24:31.835296    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:24:31.835304    3649 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 16:24:31.838795    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0815 16:24:31.891055    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0815 16:24:31.891732    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:24:31.891746    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:24:31.891753    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:24:31.891763    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:24:32.275543    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0815 16:24:32.275556    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0815 16:24:32.390162    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:24:32.390181    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:24:32.390193    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:24:32.390217    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:24:32.391060    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0815 16:24:32.391070    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0815 16:24:37.953601    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:37 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0815 16:24:37.953741    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:37 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0815 16:24:37.953751    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:37 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0815 16:24:37.980241    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:37 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0815 16:24:42.910400    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 16:24:42.910418    3649 main.go:141] libmachine: (ha-138000) Calling .GetMachineName
	I0815 16:24:42.910559    3649 buildroot.go:166] provisioning hostname "ha-138000"
	I0815 16:24:42.910571    3649 main.go:141] libmachine: (ha-138000) Calling .GetMachineName
	I0815 16:24:42.910673    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:42.910777    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:42.910859    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:42.910959    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:42.911045    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:42.911177    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:24:42.911343    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:24:42.911352    3649 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-138000 && echo "ha-138000" | sudo tee /etc/hostname
	I0815 16:24:42.985179    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-138000
	
	I0815 16:24:42.985199    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:42.985338    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:42.985446    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:42.985538    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:42.985614    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:42.985749    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:24:42.985891    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:24:42.985905    3649 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-138000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-138000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-138000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 16:24:43.055472    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
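	The empty command output above is the success case: the hostname was already mapped or has just been appended. Either way the guest's /etc/hosts ends up with a single line of the form (illustrative):
	  127.0.1.1 ha-138000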
	I0815 16:24:43.055491    3649 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19452-977/.minikube CaCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19452-977/.minikube}
	I0815 16:24:43.055508    3649 buildroot.go:174] setting up certificates
	I0815 16:24:43.055515    3649 provision.go:84] configureAuth start
	I0815 16:24:43.055522    3649 main.go:141] libmachine: (ha-138000) Calling .GetMachineName
	I0815 16:24:43.055669    3649 main.go:141] libmachine: (ha-138000) Calling .GetIP
	I0815 16:24:43.055769    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:43.055868    3649 provision.go:143] copyHostCerts
	I0815 16:24:43.055901    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:24:43.055963    3649 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem, removing ...
	I0815 16:24:43.055971    3649 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:24:43.056106    3649 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem (1082 bytes)
	I0815 16:24:43.056322    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:24:43.056353    3649 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem, removing ...
	I0815 16:24:43.056358    3649 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:24:43.056432    3649 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem (1123 bytes)
	I0815 16:24:43.056583    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:24:43.056611    3649 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem, removing ...
	I0815 16:24:43.056615    3649 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:24:43.056681    3649 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem (1679 bytes)
	I0815 16:24:43.056840    3649 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem org=jenkins.ha-138000 san=[127.0.0.1 192.169.0.5 ha-138000 localhost minikube]
	I0815 16:24:43.121501    3649 provision.go:177] copyRemoteCerts
	I0815 16:24:43.121552    3649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 16:24:43.121568    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:43.121697    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:43.121782    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:43.121880    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:43.121971    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:24:43.165154    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 16:24:43.165236    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 16:24:43.200018    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 16:24:43.200092    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0815 16:24:43.220757    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 16:24:43.220829    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 16:24:43.240667    3649 provision.go:87] duration metric: took 185.141163ms to configureAuth
	I0815 16:24:43.240680    3649 buildroot.go:189] setting minikube options for container-runtime
	I0815 16:24:43.240849    3649 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:24:43.240863    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:43.240998    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:43.241100    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:43.241183    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:43.241273    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:43.241367    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:43.241484    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:24:43.241652    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:24:43.241660    3649 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0815 16:24:43.302884    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0815 16:24:43.302897    3649 buildroot.go:70] root file system type: tmpfs
	I0815 16:24:43.302965    3649 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0815 16:24:43.302977    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:43.303108    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:43.303198    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:43.303278    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:43.303364    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:43.303495    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:24:43.303638    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:24:43.303683    3649 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0815 16:24:43.378222    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0815 16:24:43.378246    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:43.378382    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:43.378461    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:43.378563    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:43.378649    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:43.378787    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:24:43.378932    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:24:43.378946    3649 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0815 16:24:45.080555    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0815 16:24:45.080572    3649 machine.go:96] duration metric: took 13.246248166s to provisionDockerMachine
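	The unit file written above relies on the bare ExecStart= line to clear any inherited start command, so that systemd accepts exactly one ExecStart for a Type=notify service. A quick way to confirm which ExecStart is live, run inside the guest (a sketch, not part of this test run):
	  sudo systemctl cat docker | grep '^ExecStart'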
	I0815 16:24:45.080585    3649 start.go:293] postStartSetup for "ha-138000" (driver="hyperkit")
	I0815 16:24:45.080595    3649 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 16:24:45.080616    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:45.080791    3649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 16:24:45.080805    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:45.080908    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:45.080996    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:45.081081    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:45.081171    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:24:45.119742    3649 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 16:24:45.122978    3649 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 16:24:45.122994    3649 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/addons for local assets ...
	I0815 16:24:45.123095    3649 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/files for local assets ...
	I0815 16:24:45.123274    3649 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> 14982.pem in /etc/ssl/certs
	I0815 16:24:45.123280    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /etc/ssl/certs/14982.pem
	I0815 16:24:45.123473    3649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 16:24:45.130896    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:24:45.150554    3649 start.go:296] duration metric: took 69.960327ms for postStartSetup
	I0815 16:24:45.150578    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:45.150756    3649 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0815 16:24:45.150769    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:45.150849    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:45.150943    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:45.151041    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:45.151122    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:24:45.187860    3649 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0815 16:24:45.187918    3649 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
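	rsync --archive --update restores /var/lib/minikube/backup/etc onto / while skipping any file that is already newer on the freshly booted VM, so the backup fills in missing state without clobbering files the new boot has just written.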
	I0815 16:24:45.240522    3649 fix.go:56] duration metric: took 13.602028125s for fixHost
	I0815 16:24:45.240543    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:45.240694    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:45.240782    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:45.240866    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:45.240953    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:45.241079    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:24:45.241222    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:24:45.241230    3649 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 16:24:45.308498    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723764285.546205797
	
	I0815 16:24:45.308509    3649 fix.go:216] guest clock: 1723764285.546205797
	I0815 16:24:45.308515    3649 fix.go:229] Guest: 2024-08-15 16:24:45.546205797 -0700 PDT Remote: 2024-08-15 16:24:45.240533 -0700 PDT m=+14.043250910 (delta=305.672797ms)
	I0815 16:24:45.308536    3649 fix.go:200] guest clock delta is within tolerance: 305.672797ms
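	The delta is simply guest minus host time at the moment of the probe: 1723764285.546205797 - 1723764285.240533 ≈ 0.305672797 s, i.e. the 305.672797ms reported, small enough that no guest clock correction is needed.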
	I0815 16:24:45.308540    3649 start.go:83] releasing machines lock for "ha-138000", held for 13.670085598s
	I0815 16:24:45.308562    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:45.308691    3649 main.go:141] libmachine: (ha-138000) Calling .GetIP
	I0815 16:24:45.308815    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:45.309125    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:45.309228    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:45.309333    3649 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 16:24:45.309348    3649 ssh_runner.go:195] Run: cat /version.json
	I0815 16:24:45.309359    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:45.309374    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:45.309454    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:45.309481    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:45.309570    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:45.309586    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:45.309666    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:45.309673    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:45.309753    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:24:45.309764    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:24:45.353596    3649 ssh_runner.go:195] Run: systemctl --version
	I0815 16:24:45.358729    3649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 16:24:45.412525    3649 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 16:24:45.412627    3649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 16:24:45.428066    3649 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 16:24:45.428077    3649 start.go:495] detecting cgroup driver to use...
	I0815 16:24:45.428183    3649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:24:45.444602    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0815 16:24:45.453384    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0815 16:24:45.462134    3649 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0815 16:24:45.462180    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0815 16:24:45.470781    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:24:45.479385    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0815 16:24:45.487960    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:24:45.496691    3649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 16:24:45.505669    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0815 16:24:45.514277    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0815 16:24:45.522851    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0815 16:24:45.531584    3649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 16:24:45.539529    3649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 16:24:45.547375    3649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:24:45.642699    3649 ssh_runner.go:195] Run: sudo systemctl restart containerd
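	The sed passes above pin the sandbox image and force containerd onto the cgroupfs driver before the restart. The affected fragment of /etc/containerd/config.toml would afterwards read roughly as follows (an illustrative sketch; the exact nesting of the SystemdCgroup key follows containerd's default layout and is assumed here):
	  [plugins."io.containerd.grpc.v1.cri"]
	    enable_unprivileged_ports = true
	    sandbox_image = "registry.k8s.io/pause:3.10"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false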
	I0815 16:24:45.657803    3649 start.go:495] detecting cgroup driver to use...
	I0815 16:24:45.657881    3649 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0815 16:24:45.669244    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:24:45.680074    3649 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 16:24:45.692718    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:24:45.703066    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:24:45.713234    3649 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0815 16:24:45.735236    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:24:45.745677    3649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:24:45.760852    3649 ssh_runner.go:195] Run: which cri-dockerd
	I0815 16:24:45.763929    3649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0815 16:24:45.771021    3649 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0815 16:24:45.784172    3649 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0815 16:24:45.887215    3649 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0815 16:24:45.995634    3649 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0815 16:24:45.995716    3649 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0815 16:24:46.010389    3649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:24:46.126522    3649 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0815 16:24:48.464685    3649 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.338152009s)
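	The 130-byte /etc/docker/daemon.json copied just before this restart is what moves dockerd itself onto cgroupfs; its contents are not echoed in the log, but minikube's template for this file is approximately (illustrative):
	  {
	    "exec-opts": ["native.cgroupdriver=cgroupfs"],
	    "log-driver": "json-file",
	    "log-opts": { "max-size": "100m" },
	    "storage-driver": "overlay2"
	  }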
	I0815 16:24:48.464761    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0815 16:24:48.475831    3649 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0815 16:24:48.490512    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:24:48.501692    3649 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0815 16:24:48.596754    3649 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0815 16:24:48.705379    3649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:24:48.807279    3649 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0815 16:24:48.821232    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:24:48.832145    3649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:24:48.931537    3649 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0815 16:24:48.994946    3649 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0815 16:24:48.995028    3649 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0815 16:24:48.999199    3649 start.go:563] Will wait 60s for crictl version
	I0815 16:24:48.999246    3649 ssh_runner.go:195] Run: which crictl
	I0815 16:24:49.002242    3649 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 16:24:49.031023    3649 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0815 16:24:49.031095    3649 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:24:49.049391    3649 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:24:49.110204    3649 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0815 16:24:49.110253    3649 main.go:141] libmachine: (ha-138000) Calling .GetIP
	I0815 16:24:49.110630    3649 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0815 16:24:49.114885    3649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
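	That one-liner rewrites /etc/hosts via a temp file: it filters out any stale host.minikube.internal entry, appends the fresh mapping, then copies the temp file back in one step, leaving the guest able to resolve host.minikube.internal to the hyperkit gateway 192.169.0.1.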
	I0815 16:24:49.125317    3649 kubeadm.go:883] updating cluster {Name:ha-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 16:24:49.125409    3649 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:24:49.125461    3649 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0815 16:24:49.138389    3649 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0815 16:24:49.138400    3649 docker.go:615] Images already preloaded, skipping extraction
	I0815 16:24:49.138469    3649 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0815 16:24:49.152217    3649 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0815 16:24:49.152236    3649 cache_images.go:84] Images are preloaded, skipping loading
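(Editor's note: "Images are preloaded, skipping loading" is decided by listing docker images and checking that the expected set is already present. A hedged sketch of that subset check; the function name and the trimmed image lists are illustrative, image names are copied from the log:)

    package main

    import "fmt"

    // imagesPreloaded reports whether every expected image already appears
    // in the `docker images --format {{.Repository}}:{{.Tag}}` listing.
    func imagesPreloaded(listed, expected []string) bool {
        have := make(map[string]bool, len(listed))
        for _, img := range listed {
            have[img] = true
        }
        for _, img := range expected {
            if !have[img] {
                return false
            }
        }
        return true
    }

    func main() {
        listed := []string{
            "registry.k8s.io/kube-apiserver:v1.31.0",
            "registry.k8s.io/etcd:3.5.15-0",
            "registry.k8s.io/pause:3.10",
        }
        expected := []string{"registry.k8s.io/etcd:3.5.15-0"}
        fmt.Println(imagesPreloaded(listed, expected)) // true
    }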
	I0815 16:24:49.152245    3649 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0815 16:24:49.152316    3649 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-138000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 16:24:49.152387    3649 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0815 16:24:49.188207    3649 cni.go:84] Creating CNI manager for ""
	I0815 16:24:49.188219    3649 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0815 16:24:49.188233    3649 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 16:24:49.188247    3649 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-138000 NodeName:ha-138000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 16:24:49.188328    3649 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-138000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
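	(Editor's note: the multi-document kubeadm config above is rendered from the options struct logged at kubeadm.go:181. A minimal sketch of that struct-to-YAML templating step using text/template; the struct fields and template body here are a reduced, illustrative subset, not minikube's actual template:)

    package main

    import (
        "os"
        "text/template"
    )

    // A reduced slice of the options logged at kubeadm.go:181.
    type kubeadmOpts struct {
        AdvertiseAddress string
        APIServerPort    int
        ClusterName      string
        PodSubnet        string
        ServiceCIDR      string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    clusterName: {{.ClusterName}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceCIDR}}
    `

    func main() {
        opts := kubeadmOpts{
            AdvertiseAddress: "192.169.0.5",
            APIServerPort:    8443,
            ClusterName:      "ha-138000",
            PodSubnet:        "10.244.0.0/16",
            ServiceCIDR:      "10.96.0.0/12",
        }
        template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, opts)
    }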
	
	I0815 16:24:49.188340    3649 kube-vip.go:115] generating kube-vip config ...
	I0815 16:24:49.188395    3649 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 16:24:49.201717    3649 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 16:24:49.201810    3649 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
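	(Editor's note: kube-vip.go:167 above only "auto-enables control-plane load-balancing" after the modprobe probe for the IPVS modules succeeds, which is why the manifest carries lb_enable/lb_port. A sketch of that gate; the command string is taken from the log, and running it requires root inside the guest:)

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ipvsAvailable mirrors the log's modprobe probe: if the IPVS modules
    // load, kube-vip's lb_enable can safely be set to "true".
    func ipvsAvailable() bool {
        err := exec.Command("sudo", "sh", "-c",
            "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack").Run()
        return err == nil
    }

    func main() {
        lbEnable := "false"
        if ipvsAvailable() {
            lbEnable = "true" // auto-enabling control-plane load-balancing
        }
        fmt.Println("lb_enable =", lbEnable)
    }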
	I0815 16:24:49.201860    3649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 16:24:49.210773    3649 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 16:24:49.210821    3649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0815 16:24:49.218705    3649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0815 16:24:49.232092    3649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 16:24:49.245488    3649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0815 16:24:49.259182    3649 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0815 16:24:49.272667    3649 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0815 16:24:49.275463    3649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 16:24:49.285341    3649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:24:49.379165    3649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:24:49.393690    3649 certs.go:68] Setting up /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000 for IP: 192.169.0.5
	I0815 16:24:49.393701    3649 certs.go:194] generating shared ca certs ...
	I0815 16:24:49.393711    3649 certs.go:226] acquiring lock for ca certs: {Name:mka1c88019f6064d2983ac988db71a67aeb65696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:24:49.393886    3649 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key
	I0815 16:24:49.393940    3649 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key
	I0815 16:24:49.393952    3649 certs.go:256] generating profile certs ...
	I0815 16:24:49.394054    3649 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.key
	I0815 16:24:49.394074    3649 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key.7af4c91a
	I0815 16:24:49.394091    3649 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt.7af4c91a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0815 16:24:49.771714    3649 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt.7af4c91a ...
	I0815 16:24:49.771738    3649 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt.7af4c91a: {Name:mkfdf96fafb98f174dadc5b6379869463c2a6ca3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:24:49.772085    3649 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key.7af4c91a ...
	I0815 16:24:49.772094    3649 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key.7af4c91a: {Name:mk0c2b233ae670508e502baf145f82fc5c8af979 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:24:49.772311    3649 certs.go:381] copying /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt.7af4c91a -> /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt
	I0815 16:24:49.772506    3649 certs.go:385] copying /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key.7af4c91a -> /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key
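(Editor's note: the apiserver profile cert above is generated with IP SANs covering every control-plane node plus the HA VIP 192.169.0.254, so the same cert is valid no matter which node holds the VIP. A self-contained crypto/x509 sketch of issuing such a cert; for brevity it is self-signed here rather than signed by minikubeCA, and the SAN list is trimmed:)

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // IP SANs copied from the log, including the kube-vip VIP.
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("192.169.0.5"), net.ParseIP("192.169.0.254"),
            },
        }
        der, _ := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }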
	I0815 16:24:49.772728    3649 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key
	I0815 16:24:49.772737    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 16:24:49.772760    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 16:24:49.772779    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 16:24:49.772798    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 16:24:49.772818    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 16:24:49.772836    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 16:24:49.772855    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 16:24:49.772873    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 16:24:49.772972    3649 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem (1338 bytes)
	W0815 16:24:49.773012    3649 certs.go:480] ignoring /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498_empty.pem, impossibly tiny 0 bytes
	I0815 16:24:49.773021    3649 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 16:24:49.773066    3649 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem (1082 bytes)
	I0815 16:24:49.773106    3649 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem (1123 bytes)
	I0815 16:24:49.773135    3649 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem (1679 bytes)
	I0815 16:24:49.773201    3649 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:24:49.773235    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /usr/share/ca-certificates/14982.pem
	I0815 16:24:49.773257    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:24:49.773276    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem -> /usr/share/ca-certificates/1498.pem
	I0815 16:24:49.773761    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 16:24:49.799905    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 16:24:49.819857    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 16:24:49.839446    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 16:24:49.859479    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 16:24:49.878979    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 16:24:49.898857    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 16:24:49.918488    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 16:24:49.938289    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /usr/share/ca-certificates/14982.pem (1708 bytes)
	I0815 16:24:49.958067    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 16:24:49.977508    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem --> /usr/share/ca-certificates/1498.pem (1338 bytes)
	I0815 16:24:49.997111    3649 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 16:24:50.010415    3649 ssh_runner.go:195] Run: openssl version
	I0815 16:24:50.014564    3649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14982.pem && ln -fs /usr/share/ca-certificates/14982.pem /etc/ssl/certs/14982.pem"
	I0815 16:24:50.022762    3649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14982.pem
	I0815 16:24:50.025974    3649 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:14 /usr/share/ca-certificates/14982.pem
	I0815 16:24:50.026012    3649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14982.pem
	I0815 16:24:50.030247    3649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14982.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 16:24:50.038688    3649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 16:24:50.046935    3649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:24:50.050205    3649 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:05 /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:24:50.050240    3649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:24:50.054437    3649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 16:24:50.062668    3649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1498.pem && ln -fs /usr/share/ca-certificates/1498.pem /etc/ssl/certs/1498.pem"
	I0815 16:24:50.070835    3649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1498.pem
	I0815 16:24:50.074144    3649 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:14 /usr/share/ca-certificates/1498.pem
	I0815 16:24:50.074179    3649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1498.pem
	I0815 16:24:50.078407    3649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1498.pem /etc/ssl/certs/51391683.0"
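(Editor's note: each CA above is installed by hashing it with openssl x509 -hash and symlinking the PEM as <hash>.0 under /etc/ssl/certs, the same directory layout c_rehash produces and that OpenSSL uses for CA lookup. A sketch of that pair of steps; the paths in main are illustrative, and the real run does this over SSH with sudo:)

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // hashLink computes the OpenSSL subject hash of certPath and links it
    // into certsDir as <hash>.0, as in the log's ln -fs step.
    func hashLink(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := fmt.Sprintf("%s/%s.0", certsDir, hash)
        os.Remove(link) // emulate the -f (force) in ln -fs
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem", "/tmp/certs"); err != nil {
            fmt.Println(err)
        }
    }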
	I0815 16:24:50.087458    3649 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 16:24:50.090800    3649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 16:24:50.095300    3649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 16:24:50.099454    3649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 16:24:50.104181    3649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 16:24:50.108451    3649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 16:24:50.112679    3649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
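(Editor's note: the -checkend 86400 probes above ask whether each certificate will still be valid 24 hours from now; a failure is what triggers cert regeneration. The same check in pure Go, with an illustrative path in main:)

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM cert at path expires inside d,
    // matching `openssl x509 -checkend` semantics.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 86400*time.Second)
        fmt.Println(soon, err) // true means regenerate
    }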
	I0815 16:24:50.116963    3649 kubeadm.go:392] StartCluster: {Name:ha-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 C
lusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:24:50.117082    3649 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0815 16:24:50.130554    3649 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 16:24:50.137992    3649 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 16:24:50.138004    3649 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 16:24:50.138048    3649 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 16:24:50.145558    3649 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 16:24:50.145859    3649 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-138000" does not appear in /Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:24:50.145940    3649 kubeconfig.go:62] /Users/jenkins/minikube-integration/19452-977/kubeconfig needs updating (will repair): [kubeconfig missing "ha-138000" cluster setting kubeconfig missing "ha-138000" context setting]
	I0815 16:24:50.146137    3649 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-977/kubeconfig: {Name:mk0f04b0199f84dc20eb294d5b790451b41e43fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:24:50.146558    3649 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:24:50.146752    3649 kapi.go:59] client config for ha-138000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.key", CAFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, Use
rAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x5983f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0815 16:24:50.147060    3649 cert_rotation.go:140] Starting client certificate rotation controller
	I0815 16:24:50.147235    3649 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 16:24:50.154308    3649 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0815 16:24:50.154320    3649 kubeadm.go:597] duration metric: took 16.312125ms to restartPrimaryControlPlane
	I0815 16:24:50.154325    3649 kubeadm.go:394] duration metric: took 37.367941ms to StartCluster
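(Editor's note: the "needs updating (will repair)" decision above comes from loading the kubeconfig and checking that both a cluster and a context entry exist for the profile. A sketch of that verify step using client-go's clientcmd loader, assuming k8s.io/client-go is available; the kubeconfig path in main is illustrative:)

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    // missingEntries mirrors kubeconfig.go's verify step: report which of
    // the cluster/context settings for name are absent from the kubeconfig.
    func missingEntries(path, name string) ([]string, error) {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            return nil, err
        }
        var missing []string
        if _, ok := cfg.Clusters[name]; !ok {
            missing = append(missing, "cluster")
        }
        if _, ok := cfg.Contexts[name]; !ok {
            missing = append(missing, "context")
        }
        return missing, nil
    }

    func main() {
        m, err := missingEntries("/Users/jenkins/.kube/config", "ha-138000")
        fmt.Println(m, err) // non-empty slice => kubeconfig needs repairing
    }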
	I0815 16:24:50.154333    3649 settings.go:142] acquiring lock: {Name:mk694dad19d37394fa6b13c51a7dc54b62e97c12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:24:50.154408    3649 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:24:50.154767    3649 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-977/kubeconfig: {Name:mk0f04b0199f84dc20eb294d5b790451b41e43fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:24:50.154992    3649 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 16:24:50.155005    3649 start.go:241] waiting for startup goroutines ...
	I0815 16:24:50.155016    3649 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 16:24:50.155148    3649 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:24:50.196433    3649 out.go:177] * Enabled addons: 
	I0815 16:24:50.217474    3649 addons.go:510] duration metric: took 62.454726ms for enable addons: enabled=[]
	I0815 16:24:50.217512    3649 start.go:246] waiting for cluster config update ...
	I0815 16:24:50.217524    3649 start.go:255] writing updated cluster config ...
	I0815 16:24:50.239613    3649 out.go:201] 
	I0815 16:24:50.260810    3649 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:24:50.260937    3649 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:24:50.282712    3649 out.go:177] * Starting "ha-138000-m02" control-plane node in "ha-138000" cluster
	I0815 16:24:50.324521    3649 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:24:50.324584    3649 cache.go:56] Caching tarball of preloaded images
	I0815 16:24:50.324754    3649 preload.go:172] Found /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0815 16:24:50.324772    3649 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:24:50.324901    3649 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:24:50.325802    3649 start.go:360] acquireMachinesLock for ha-138000-m02: {Name:mkac8034c8a60617271f2754a03dcaf74fc4fef7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:24:50.325911    3649 start.go:364] duration metric: took 84.439µs to acquireMachinesLock for "ha-138000-m02"
	I0815 16:24:50.325938    3649 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:24:50.325946    3649 fix.go:54] fixHost starting: m02
	I0815 16:24:50.326424    3649 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:24:50.326451    3649 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:24:50.335682    3649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52048
	I0815 16:24:50.336051    3649 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:24:50.336443    3649 main.go:141] libmachine: Using API Version  1
	I0815 16:24:50.336459    3649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:24:50.336675    3649 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:24:50.336791    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:24:50.336888    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetState
	I0815 16:24:50.336961    3649 main.go:141] libmachine: (ha-138000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:24:50.337044    3649 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid from json: 3600
	I0815 16:24:50.337930    3649 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid 3600 missing from process table
	I0815 16:24:50.337962    3649 fix.go:112] recreateIfNeeded on ha-138000-m02: state=Stopped err=<nil>
	I0815 16:24:50.337972    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	W0815 16:24:50.338053    3649 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:24:50.379676    3649 out.go:177] * Restarting existing hyperkit VM for "ha-138000-m02" ...
	I0815 16:24:50.400415    3649 main.go:141] libmachine: (ha-138000-m02) Calling .Start
	I0815 16:24:50.400691    3649 main.go:141] libmachine: (ha-138000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:24:50.400747    3649 main.go:141] libmachine: (ha-138000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/hyperkit.pid
	I0815 16:24:50.402488    3649 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid 3600 missing from process table
	I0815 16:24:50.402502    3649 main.go:141] libmachine: (ha-138000-m02) DBG | pid 3600 is in state "Stopped"
	I0815 16:24:50.402518    3649 main.go:141] libmachine: (ha-138000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/hyperkit.pid...
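(Editor's note: "hyperkit pid 3600 missing from process table" — the driver reads the pid from the leftover hyperkit.pid file and signal-0 probes it; if the process is gone, the pid file is stale and is removed before restart. A sketch of that liveness probe; the pid and pid-file path in main are illustrative:)

    package main

    import (
        "fmt"
        "os"
        "syscall"
    )

    // pidAlive sends signal 0, which performs the existence/permission
    // check without actually delivering a signal to the process.
    func pidAlive(pid int) bool {
        proc, err := os.FindProcess(pid) // always succeeds on Unix
        if err != nil {
            return false
        }
        return proc.Signal(syscall.Signal(0)) == nil
    }

    func main() {
        pid := 3600 // pid recovered from the stale hyperkit.pid file
        if !pidAlive(pid) {
            fmt.Printf("pid %d missing from process table, removing stale pid file\n", pid)
            os.Remove("/tmp/hyperkit.pid") // illustrative path
        }
    }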
	I0815 16:24:50.402857    3649 main.go:141] libmachine: (ha-138000-m02) DBG | Using UUID 4cff9b5a-9fe3-4215-9139-05f05b79bce3
	I0815 16:24:50.432166    3649 main.go:141] libmachine: (ha-138000-m02) DBG | Generated MAC 9a:c2:e9:d7:1c:58
	I0815 16:24:50.432194    3649 main.go:141] libmachine: (ha-138000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000
	I0815 16:24:50.432283    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"4cff9b5a-9fe3-4215-9139-05f05b79bce3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b06c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:24:50.432316    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"4cff9b5a-9fe3-4215-9139-05f05b79bce3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b06c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:24:50.432360    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "4cff9b5a-9fe3-4215-9139-05f05b79bce3", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/ha-138000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-13
8000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"}
	I0815 16:24:50.432400    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 4cff9b5a-9fe3-4215-9139-05f05b79bce3 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/ha-138000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 co
nsole=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"
	I0815 16:24:50.432410    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0815 16:24:50.433800    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 DEBUG: hyperkit: Pid is 3670
	I0815 16:24:50.434270    3649 main.go:141] libmachine: (ha-138000-m02) DBG | Attempt 0
	I0815 16:24:50.434284    3649 main.go:141] libmachine: (ha-138000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:24:50.434361    3649 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid from json: 3670
	I0815 16:24:50.436313    3649 main.go:141] libmachine: (ha-138000-m02) DBG | Searching for 9a:c2:e9:d7:1c:58 in /var/db/dhcpd_leases ...
	I0815 16:24:50.436365    3649 main.go:141] libmachine: (ha-138000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0815 16:24:50.436381    3649 main.go:141] libmachine: (ha-138000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfdfb9}
	I0815 16:24:50.436395    3649 main.go:141] libmachine: (ha-138000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be8e15}
	I0815 16:24:50.436408    3649 main.go:141] libmachine: (ha-138000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfdf74}
	I0815 16:24:50.436429    3649 main.go:141] libmachine: (ha-138000-m02) DBG | Found match: 9a:c2:e9:d7:1c:58
	I0815 16:24:50.436463    3649 main.go:141] libmachine: (ha-138000-m02) DBG | IP: 192.169.0.6
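(Editor's note: with no guest agent, the driver discovers the VM's IP by scanning macOS's /var/db/dhcpd_leases for the MAC it generated. A sketch of that lookup, assuming the usual lease-block layout where ip_address precedes hw_address, consistent with the parsed entries above:)

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // ipForMAC scans a dhcpd_leases-style file for an hw_address matching
    // mac and returns the ip_address recorded in the same lease block.
    func ipForMAC(path, mac string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()
        var ip string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if strings.HasPrefix(line, "ip_address=") {
                ip = strings.TrimPrefix(line, "ip_address=")
            }
            // hw_address lines look like "hw_address=1,9a:c2:e9:d7:1c:58".
            if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac) {
                return ip, nil
            }
        }
        return "", fmt.Errorf("no lease found for %s", mac)
    }

    func main() {
        ip, err := ipForMAC("/var/db/dhcpd_leases", "9a:c2:e9:d7:1c:58")
        fmt.Println(ip, err) // expect 192.169.0.6 per the log
    }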
	I0815 16:24:50.436476    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetConfigRaw
	I0815 16:24:50.437131    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetIP
	I0815 16:24:50.437308    3649 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:24:50.437758    3649 machine.go:93] provisionDockerMachine start ...
	I0815 16:24:50.437768    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:24:50.437887    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:24:50.437997    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:24:50.438094    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:24:50.438199    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:24:50.438287    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:24:50.438398    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:24:50.438546    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:24:50.438554    3649 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 16:24:50.441514    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0815 16:24:50.450166    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0815 16:24:50.451006    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:24:50.451024    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:24:50.451053    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:24:50.451081    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:24:50.836828    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0815 16:24:50.836848    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0815 16:24:50.951307    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:24:50.951325    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:24:50.951354    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:24:50.951377    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:24:50.952254    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0815 16:24:50.952268    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0815 16:24:56.551926    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:56 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0815 16:24:56.551945    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:56 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0815 16:24:56.551957    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:56 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0815 16:24:56.576187    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:56 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0815 16:25:25.506687    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 16:25:25.506701    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetMachineName
	I0815 16:25:25.506833    3649 buildroot.go:166] provisioning hostname "ha-138000-m02"
	I0815 16:25:25.506845    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetMachineName
	I0815 16:25:25.506942    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:25.507027    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:25.507110    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:25.507196    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:25.507274    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:25.507413    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:25:25.507576    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:25:25.507586    3649 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-138000-m02 && echo "ha-138000-m02" | sudo tee /etc/hostname
	I0815 16:25:25.578727    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-138000-m02
	
	I0815 16:25:25.578742    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:25.578877    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:25.578967    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:25.579045    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:25.579129    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:25.579269    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:25:25.579419    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:25:25.579432    3649 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-138000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-138000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-138000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 16:25:25.645270    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 16:25:25.645285    3649 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19452-977/.minikube CaCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19452-977/.minikube}
	I0815 16:25:25.645301    3649 buildroot.go:174] setting up certificates
	I0815 16:25:25.645307    3649 provision.go:84] configureAuth start
	I0815 16:25:25.645342    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetMachineName
	I0815 16:25:25.645472    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetIP
	I0815 16:25:25.645569    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:25.645659    3649 provision.go:143] copyHostCerts
	I0815 16:25:25.645686    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:25:25.645746    3649 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem, removing ...
	I0815 16:25:25.645752    3649 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:25:25.645910    3649 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem (1082 bytes)
	I0815 16:25:25.646118    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:25:25.646164    3649 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem, removing ...
	I0815 16:25:25.646169    3649 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:25:25.646253    3649 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem (1123 bytes)
	I0815 16:25:25.646420    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:25:25.646496    3649 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem, removing ...
	I0815 16:25:25.646504    3649 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:25:25.646598    3649 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem (1679 bytes)
	I0815 16:25:25.646765    3649 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem org=jenkins.ha-138000-m02 san=[127.0.0.1 192.169.0.6 ha-138000-m02 localhost minikube]
	I0815 16:25:25.825658    3649 provision.go:177] copyRemoteCerts
	I0815 16:25:25.825707    3649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 16:25:25.825722    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:25.825863    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:25.825953    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:25.826053    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:25.826140    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	I0815 16:25:25.862344    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 16:25:25.862417    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 16:25:25.882572    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 16:25:25.882639    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 16:25:25.902404    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 16:25:25.902470    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 16:25:25.922317    3649 provision.go:87] duration metric: took 277.0023ms to configureAuth
	I0815 16:25:25.922332    3649 buildroot.go:189] setting minikube options for container-runtime
	I0815 16:25:25.922512    3649 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:25:25.922526    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:25:25.922660    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:25.922753    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:25.922847    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:25.922931    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:25.923029    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:25.923140    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:25:25.923269    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:25:25.923277    3649 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0815 16:25:25.984805    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0815 16:25:25.984816    3649 buildroot.go:70] root file system type: tmpfs
	I0815 16:25:25.984938    3649 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0815 16:25:25.984949    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:25.985083    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:25.985169    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:25.985249    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:25.985329    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:25.985450    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:25:25.985607    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:25:25.985653    3649 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0815 16:25:26.056607    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0815 16:25:26.056625    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:26.056761    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:26.056863    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:26.056957    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:26.057043    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:26.057179    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:25:26.057326    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:25:26.057338    3649 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0815 16:25:27.732286    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0815 16:25:27.732301    3649 machine.go:96] duration metric: took 37.294661422s to provisionDockerMachine
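The block above is minikube's idempotent unit update: the rendered file is written to docker.service.new and only swapped into place (followed by daemon-reload, enable, and restart) when `diff` reports a difference; here the target unit did not exist yet, so the swap ran and the symlink was created. A minimal way to sanity-check the installed unit on the guest afterwards, using standard systemd tooling (these commands are not part of the logged run):

	systemctl cat docker                     # print the unit file systemd actually loaded
	systemd-analyze verify docker.service    # would flag e.g. a duplicate ExecStart= setting
	systemctl show docker -p ExecStart       # confirm the empty ExecStart= reset left one command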
	I0815 16:25:27.732309    3649 start.go:293] postStartSetup for "ha-138000-m02" (driver="hyperkit")
	I0815 16:25:27.732317    3649 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 16:25:27.732327    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:25:27.732516    3649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 16:25:27.732528    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:27.732625    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:27.732731    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:27.732809    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:27.732896    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	I0815 16:25:27.769243    3649 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 16:25:27.772355    3649 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 16:25:27.772366    3649 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/addons for local assets ...
	I0815 16:25:27.772467    3649 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/files for local assets ...
	I0815 16:25:27.772656    3649 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> 14982.pem in /etc/ssl/certs
	I0815 16:25:27.772668    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /etc/ssl/certs/14982.pem
	I0815 16:25:27.772873    3649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 16:25:27.780868    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:25:27.799647    3649 start.go:296] duration metric: took 67.329668ms for postStartSetup
	I0815 16:25:27.799668    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:25:27.799829    3649 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0815 16:25:27.799842    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:27.799928    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:27.800000    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:27.800074    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:27.800149    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	I0815 16:25:27.837218    3649 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0815 16:25:27.837277    3649 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0815 16:25:27.871532    3649 fix.go:56] duration metric: took 37.545710837s for fixHost
	I0815 16:25:27.871559    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:27.871714    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:27.871806    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:27.871884    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:27.871974    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:27.872101    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:25:27.872250    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:25:27.872257    3649 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 16:25:27.932451    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723764328.172025914
	
	I0815 16:25:27.932464    3649 fix.go:216] guest clock: 1723764328.172025914
	I0815 16:25:27.932470    3649 fix.go:229] Guest: 2024-08-15 16:25:28.172025914 -0700 PDT Remote: 2024-08-15 16:25:27.871549 -0700 PDT m=+56.674410917 (delta=300.476914ms)
	I0815 16:25:27.932480    3649 fix.go:200] guest clock delta is within tolerance: 300.476914ms
	I0815 16:25:27.932484    3649 start.go:83] releasing machines lock for "ha-138000-m02", held for 37.606689063s
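The fix.go clock check above reads `date +%s.%N` on the guest and compares it against the host time captured around the SSH round-trip; the ~300ms delta is inside the tolerance, so the guest clock is left alone. A rough shell equivalent of that comparison (illustrative only; the address and key path are this run's, and it assumes GNU date with %N support on both ends):

	guest=$(ssh -i /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa docker@192.169.0.6 date +%s.%N)
	host=$(date +%s.%N)                            # host clock at (roughly) the same instant
	echo "skew: $(echo "$guest - $host" | bc)s"    # only a large skew would trigger a clock reset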
	I0815 16:25:27.932502    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:25:27.932640    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetIP
	I0815 16:25:27.955698    3649 out.go:177] * Found network options:
	I0815 16:25:27.976977    3649 out.go:177]   - NO_PROXY=192.169.0.5
	W0815 16:25:27.997880    3649 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 16:25:27.997916    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:25:27.998743    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:25:27.998959    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:25:27.999062    3649 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 16:25:27.999103    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	W0815 16:25:27.999149    3649 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 16:25:27.999255    3649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 16:25:27.999276    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:27.999310    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:27.999538    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:27.999567    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:27.999751    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:27.999778    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:27.999890    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:27.999915    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	I0815 16:25:28.000017    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	W0815 16:25:28.032774    3649 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 16:25:28.032832    3649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 16:25:28.085200    3649 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 16:25:28.085222    3649 start.go:495] detecting cgroup driver to use...
	I0815 16:25:28.085337    3649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:25:28.101256    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0815 16:25:28.110461    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0815 16:25:28.119610    3649 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0815 16:25:28.119671    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0815 16:25:28.128841    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:25:28.137598    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0815 16:25:28.146542    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:25:28.155343    3649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 16:25:28.164400    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0815 16:25:28.173324    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0815 16:25:28.182447    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0815 16:25:28.191439    3649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 16:25:28.199534    3649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 16:25:28.207385    3649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:25:28.307256    3649 ssh_runner.go:195] Run: sudo systemctl restart containerd
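The run of sed edits above rewrites /etc/containerd/config.toml in place: pin the sandbox image, disable restrict_oom_score_adj, force SystemdCgroup = false to match the cgroupfs driver chosen here, and retire the legacy io.containerd.runtime.v1.linux shim in favor of runc v2. The log never prints the resulting file; the fragment those edits target looks roughly like this (illustrative reconstruction, not the literal config):

	[plugins."io.containerd.grpc.v1.cri"]
	  sandbox_image = "registry.k8s.io/pause:3.10"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false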
	I0815 16:25:28.326701    3649 start.go:495] detecting cgroup driver to use...
	I0815 16:25:28.326772    3649 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0815 16:25:28.345963    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:25:28.361865    3649 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 16:25:28.380032    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:25:28.392583    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:25:28.403338    3649 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0815 16:25:28.425534    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:25:28.435952    3649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:25:28.450826    3649 ssh_runner.go:195] Run: which cri-dockerd
	I0815 16:25:28.453880    3649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0815 16:25:28.461213    3649 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0815 16:25:28.474603    3649 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0815 16:25:28.569552    3649 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0815 16:25:28.669486    3649 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0815 16:25:28.669508    3649 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
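Those 130 bytes written to /etc/docker/daemon.json carry the cgroupfs setting the surrounding lines describe, so dockerd and the kubelet agree on a cgroup driver. The payload exists only in memory and is never echoed into the log; minikube's template for it is along these lines (illustrative, not the captured bytes):

	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"],
	  "log-driver": "json-file",
	  "log-opts": { "max-size": "100m" },
	  "storage-driver": "overlay2"
	}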
	I0815 16:25:28.684315    3649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:25:28.789048    3649 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0815 16:26:29.810459    3649 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.021600349s)
	I0815 16:26:29.810528    3649 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0815 16:26:29.846420    3649 out.go:201] 
	W0815 16:26:29.868048    3649 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 15 23:25:25 ha-138000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 15 23:25:25 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:25.509065819Z" level=info msg="Starting up"
	Aug 15 23:25:25 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:25.509592997Z" level=info msg="containerd not running, starting managed containerd"
	Aug 15 23:25:25 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:25.510095236Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=519
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.527964893Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.542679991Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.542751629Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.542813012Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.542847466Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.542971116Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.543022892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.543226251Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.543273769Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.543307918Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.543342764Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.543453732Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.543640009Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.545258649Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.545308637Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.545445977Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.545492906Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.545600399Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.545650460Z" level=info msg="metadata content store policy set" policy=shared
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.547717368Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.547830207Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.547884234Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.548013412Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.548060318Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.548127353Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.548391092Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.552607490Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.552725748Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.552840021Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.552885041Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.552918051Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.552984961Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553030860Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553064737Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553096185Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553126522Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553162873Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553202352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553233572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553266178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553297774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553327631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553357374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553386246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553418283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553450098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553484562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553517795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553547301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553576466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553607695Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553650178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553684928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553713941Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553789004Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553836418Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553870209Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553907631Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.554030910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.554116351Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.554162242Z" level=info msg="NRI interface is disabled by configuration."
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.554425646Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.554560798Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.554647146Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.554690019Z" level=info msg="containerd successfully booted in 0.027466s"
	Aug 15 23:25:26 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:26.539092962Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 15 23:25:26 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:26.579801466Z" level=info msg="Loading containers: start."
	Aug 15 23:25:26 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:26.753629817Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 15 23:25:27 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:27.897778336Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 15 23:25:27 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:27.941918967Z" level=info msg="Loading containers: done."
	Aug 15 23:25:27 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:27.949162882Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 15 23:25:27 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:27.949300191Z" level=info msg="Daemon has completed initialization"
	Aug 15 23:25:27 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:27.970294492Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 15 23:25:27 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:27.970499353Z" level=info msg="API listen on [::]:2376"
	Aug 15 23:25:27 ha-138000-m02 systemd[1]: Started Docker Application Container Engine.
	Aug 15 23:25:29 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:29.040016751Z" level=info msg="Processing signal 'terminated'"
	Aug 15 23:25:29 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:29.040919337Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 15 23:25:29 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:29.041235066Z" level=info msg="Daemon shutdown complete"
	Aug 15 23:25:29 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:29.041343453Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 15 23:25:29 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:29.041349896Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 15 23:25:29 ha-138000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Aug 15 23:25:30 ha-138000-m02 systemd[1]: docker.service: Deactivated successfully.
	Aug 15 23:25:30 ha-138000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Aug 15 23:25:30 ha-138000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 15 23:25:30 ha-138000-m02 dockerd[1088]: time="2024-08-15T23:25:30.078915638Z" level=info msg="Starting up"
	Aug 15 23:26:30 ha-138000-m02 dockerd[1088]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 15 23:26:30 ha-138000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 15 23:26:30 ha-138000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 15 23:26:30 ha-138000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0815 16:26:29.868131    3649 out.go:270] * 
	W0815 16:26:29.869562    3649 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 16:26:29.930973    3649 out.go:201] 
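The decisive line in the journal above is at 23:26:30: after the config change, the restarted dockerd (pid 1088) hung for its full 60s timeout trying to dial /run/containerd/containerd.sock and exited 1, which is exactly the RUNTIME_ENABLE failure minikube reports. When reproducing this by hand, the containerd side of that socket is the first thing to inspect on the guest (generic commands, not taken from this run):

	sudo systemctl status containerd --no-pager     # is a containerd running to answer that socket?
	ls -l /run/containerd/containerd.sock           # does the socket dockerd tried to dial exist?
	sudo journalctl -u containerd --no-pager -n 50  # containerd's own log for the same window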
	
	
	==> Docker <==
	Aug 15 23:25:38 ha-138000 dockerd[1164]: time="2024-08-15T23:25:38.961334482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:25:59 ha-138000 dockerd[1157]: time="2024-08-15T23:25:59.760572159Z" level=info msg="ignoring event" container=4745e33319a09d9bd5f6d9af75281ffc2c6681d2c3666cbb617dc23792e131ae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 15 23:25:59 ha-138000 dockerd[1164]: time="2024-08-15T23:25:59.761266003Z" level=info msg="shim disconnected" id=4745e33319a09d9bd5f6d9af75281ffc2c6681d2c3666cbb617dc23792e131ae namespace=moby
	Aug 15 23:25:59 ha-138000 dockerd[1164]: time="2024-08-15T23:25:59.761315524Z" level=warning msg="cleaning up after shim disconnected" id=4745e33319a09d9bd5f6d9af75281ffc2c6681d2c3666cbb617dc23792e131ae namespace=moby
	Aug 15 23:25:59 ha-138000 dockerd[1164]: time="2024-08-15T23:25:59.761324344Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 15 23:25:59 ha-138000 dockerd[1157]: time="2024-08-15T23:25:59.792363842Z" level=info msg="ignoring event" container=b9045283b928de777b7f4d15d6de8027f3786562c871977d957e2dd9d2a95b29 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 15 23:25:59 ha-138000 dockerd[1164]: time="2024-08-15T23:25:59.792820549Z" level=info msg="shim disconnected" id=b9045283b928de777b7f4d15d6de8027f3786562c871977d957e2dd9d2a95b29 namespace=moby
	Aug 15 23:25:59 ha-138000 dockerd[1164]: time="2024-08-15T23:25:59.792922467Z" level=warning msg="cleaning up after shim disconnected" id=b9045283b928de777b7f4d15d6de8027f3786562c871977d957e2dd9d2a95b29 namespace=moby
	Aug 15 23:25:59 ha-138000 dockerd[1164]: time="2024-08-15T23:25:59.792961894Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 15 23:26:22 ha-138000 dockerd[1164]: time="2024-08-15T23:26:22.968000771Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 15 23:26:22 ha-138000 dockerd[1164]: time="2024-08-15T23:26:22.968069651Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 15 23:26:22 ha-138000 dockerd[1164]: time="2024-08-15T23:26:22.968081146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:26:22 ha-138000 dockerd[1164]: time="2024-08-15T23:26:22.968181768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:26:32 ha-138000 dockerd[1164]: time="2024-08-15T23:26:32.980665382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 15 23:26:32 ha-138000 dockerd[1164]: time="2024-08-15T23:26:32.980751317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 15 23:26:32 ha-138000 dockerd[1164]: time="2024-08-15T23:26:32.980764715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:26:32 ha-138000 dockerd[1164]: time="2024-08-15T23:26:32.980890517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:26:53 ha-138000 dockerd[1157]: time="2024-08-15T23:26:53.459040666Z" level=info msg="ignoring event" container=3067be70dd5080b978b1c43a941f337a0fe62fd963692f6a5910b7e9981d8828 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 15 23:26:53 ha-138000 dockerd[1164]: time="2024-08-15T23:26:53.459615849Z" level=info msg="shim disconnected" id=3067be70dd5080b978b1c43a941f337a0fe62fd963692f6a5910b7e9981d8828 namespace=moby
	Aug 15 23:26:53 ha-138000 dockerd[1164]: time="2024-08-15T23:26:53.459664466Z" level=warning msg="cleaning up after shim disconnected" id=3067be70dd5080b978b1c43a941f337a0fe62fd963692f6a5910b7e9981d8828 namespace=moby
	Aug 15 23:26:53 ha-138000 dockerd[1164]: time="2024-08-15T23:26:53.459673170Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 15 23:26:54 ha-138000 dockerd[1157]: time="2024-08-15T23:26:54.466022234Z" level=info msg="ignoring event" container=065b34908ec9822585d14c60d989cc0d488ea0a61e4d398a5b59f18afaf682bd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 15 23:26:54 ha-138000 dockerd[1164]: time="2024-08-15T23:26:54.466561396Z" level=info msg="shim disconnected" id=065b34908ec9822585d14c60d989cc0d488ea0a61e4d398a5b59f18afaf682bd namespace=moby
	Aug 15 23:26:54 ha-138000 dockerd[1164]: time="2024-08-15T23:26:54.467070687Z" level=warning msg="cleaning up after shim disconnected" id=065b34908ec9822585d14c60d989cc0d488ea0a61e4d398a5b59f18afaf682bd namespace=moby
	Aug 15 23:26:54 ha-138000 dockerd[1164]: time="2024-08-15T23:26:54.467080180Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3067be70dd508       604f5db92eaa8                                                                                         33 seconds ago      Exited              kube-apiserver            3                   7152268f8eec4       kube-apiserver-ha-138000
	065b34908ec98       045733566833c                                                                                         43 seconds ago      Exited              kube-controller-manager   3                   4650262cc9c5d       kube-controller-manager-ha-138000
	efbc09be8eda5       38af8ddebf499                                                                                         2 minutes ago       Running             kube-vip                  0                   0c665afd15e6f       kube-vip-ha-138000
	589038a9e36bd       2e96e5913fc06                                                                                         2 minutes ago       Running             etcd                      1                   ec285d4826baa       etcd-ha-138000
	ac6935271595c       1766f54c897f0                                                                                         2 minutes ago       Running             kube-scheduler            1                   07c1c62e41d3a       kube-scheduler-ha-138000
	8f20284cd3969       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   4 minutes ago       Exited              busybox                   0                   bfc975a528b9e       busybox-7dff88458-wgww9
	42f5d82b00417       cbb01a7bd410d                                                                                         7 minutes ago       Exited              coredns                   0                   10891f8fbffcc       coredns-6f6b679f8f-dmgt5
	3e8b806ef4f33       cbb01a7bd410d                                                                                         7 minutes ago       Exited              coredns                   0                   096ab15603b01       coredns-6f6b679f8f-zc8jj
	6a1122913bb18       6e38f40d628db                                                                                         7 minutes ago       Exited              storage-provisioner       0                   e30dde4a5a10d       storage-provisioner
	c2a16126718b3       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              7 minutes ago       Exited              kindnet-cni               0                   e260a94a203af       kindnet-77dc6
	fc2e141007efb       ad83b2ca7b09e                                                                                         7 minutes ago       Exited              kube-proxy                0                   5b40cdd6b2c24       kube-proxy-cznkn
	e919017e14bb9       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Exited              kube-vip                  0                   db3c88138b89a       kube-vip-ha-138000
	7c25fb975759b       1766f54c897f0                                                                                         7 minutes ago       Exited              kube-scheduler            0                   edd6b77fdd102       kube-scheduler-ha-138000
	0cde5d8b93f58       2e96e5913fc06                                                                                         7 minutes ago       Exited              etcd                      0                   d0d07c194103e       etcd-ha-138000
	
	
	==> coredns [3e8b806ef4f3] <==
	[INFO] 10.244.2.2:44773 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000075522s
	[INFO] 10.244.2.2:53805 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098349s
	[INFO] 10.244.2.2:34369 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000122495s
	[INFO] 10.244.0.4:59671 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000077646s
	[INFO] 10.244.0.4:41185 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000079139s
	[INFO] 10.244.0.4:42405 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000092065s
	[INFO] 10.244.0.4:54373 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000049998s
	[INFO] 10.244.0.4:57169 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000050383s
	[INFO] 10.244.0.4:37825 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085108s
	[INFO] 10.244.1.2:59685 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000072268s
	[INFO] 10.244.1.2:32923 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073054s
	[INFO] 10.244.2.2:50876 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000068102s
	[INFO] 10.244.2.2:54719 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000762s
	[INFO] 10.244.0.4:57395 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000091608s
	[INFO] 10.244.0.4:37936 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000031052s
	[INFO] 10.244.1.2:58408 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000088888s
	[INFO] 10.244.1.2:42731 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000114857s
	[INFO] 10.244.1.2:41638 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000082664s
	[INFO] 10.244.2.2:52666 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092331s
	[INFO] 10.244.2.2:41501 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000093116s
	[INFO] 10.244.0.4:48200 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000075447s
	[INFO] 10.244.0.4:35056 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000091854s
	[INFO] 10.244.0.4:36257 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000057922s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [42f5d82b0041] <==
	[INFO] 10.244.1.2:50104 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.009876264s
	[INFO] 10.244.0.4:33653 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115506s
	[INFO] 10.244.0.4:45180 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000042438s
	[INFO] 10.244.1.2:60312 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000068925s
	[INFO] 10.244.1.2:38521 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000124425s
	[INFO] 10.244.1.2:51675 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125646s
	[INFO] 10.244.1.2:33974 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000078827s
	[INFO] 10.244.2.2:38966 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000078816s
	[INFO] 10.244.2.2:56056 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000620092s
	[INFO] 10.244.2.2:32787 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000109221s
	[INFO] 10.244.2.2:55701 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000039601s
	[INFO] 10.244.0.4:52543 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000083971s
	[INFO] 10.244.0.4:55050 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000146353s
	[INFO] 10.244.1.2:52165 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100415s
	[INFO] 10.244.1.2:41123 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060755s
	[INFO] 10.244.2.2:56460 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087503s
	[INFO] 10.244.2.2:36407 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009778s
	[INFO] 10.244.0.4:40764 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000037536s
	[INFO] 10.244.0.4:58473 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000029335s
	[INFO] 10.244.1.2:38640 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000118481s
	[INFO] 10.244.2.2:46151 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000117088s
	[INFO] 10.244.2.2:34054 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000108858s
	[INFO] 10.244.0.4:56735 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000069666s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0815 23:27:06.103797    3024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0815 23:27:06.105189    3024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0815 23:27:06.106707    3024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0815 23:27:06.107974    3024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0815 23:27:06.109219    3024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
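`kubectl describe nodes` can only fail here: the container status table above shows kube-apiserver exited on its third attempt about 33 seconds earlier, so nothing answers on 127.0.0.1:8443 and every API probe gets connection refused. With the API server down, triage has to bypass kubectl and go to the runtime directly, e.g. (generic commands; the container ID is the one from the status table):

	sudo docker ps -a --filter name=kube-apiserver   # confirm the apiserver container's state
	sudo docker logs --tail 50 3067be70dd508         # its last output before exiting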
	
	
	==> dmesg <==
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.035894] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007974] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.668520] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007326] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.770013] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +1.360688] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000015] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +1.372641] systemd-fstab-generator[478]: Ignoring "noauto" option for root device
	[  +0.100663] systemd-fstab-generator[490]: Ignoring "noauto" option for root device
	[  +1.991571] systemd-fstab-generator[1087]: Ignoring "noauto" option for root device
	[  +0.238035] systemd-fstab-generator[1123]: Ignoring "noauto" option for root device
	[  +0.057754] kauditd_printk_skb: 101 callbacks suppressed
	[  +0.054898] systemd-fstab-generator[1135]: Ignoring "noauto" option for root device
	[  +0.128367] systemd-fstab-generator[1149]: Ignoring "noauto" option for root device
	[  +2.481651] systemd-fstab-generator[1367]: Ignoring "noauto" option for root device
	[  +0.106020] systemd-fstab-generator[1379]: Ignoring "noauto" option for root device
	[  +0.099803] systemd-fstab-generator[1391]: Ignoring "noauto" option for root device
	[  +0.128219] systemd-fstab-generator[1406]: Ignoring "noauto" option for root device
	[  +0.443324] systemd-fstab-generator[1569]: Ignoring "noauto" option for root device
	[  +6.920369] kauditd_printk_skb: 212 callbacks suppressed
	[Aug15 23:25] kauditd_printk_skb: 40 callbacks suppressed
	
	
	==> etcd [0cde5d8b93f5] <==
	2024/08/15 23:24:23 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-15T23:24:23.473964Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"645.370724ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-15T23:24:23.473975Z","caller":"traceutil/trace.go:171","msg":"trace[1846477602] range","detail":"{range_begin:/registry/controllerrevisions/; range_end:/registry/controllerrevisions0; }","duration":"645.382858ms","start":"2024-08-15T23:24:22.828589Z","end":"2024-08-15T23:24:23.473972Z","steps":["trace[1846477602] 'agreement among raft nodes before linearized reading'  (duration: 645.370611ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T23:24:23.473985Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T23:24:22.828583Z","time spent":"645.398405ms","remote":"127.0.0.1:48088","response type":"/etcdserverpb.KV/Range","request count":0,"request size":66,"response count":0,"response size":0,"request content":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" count_only:true "}
	2024/08/15 23:24:23 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-15T23:24:23.523484Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T23:24:23.523513Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-15T23:24:23.523614Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-15T23:24:23.526296Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"2399e7dba5b18dfe"}
	{"level":"info","ts":"2024-08-15T23:24:23.526315Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"2399e7dba5b18dfe"}
	{"level":"info","ts":"2024-08-15T23:24:23.526356Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"2399e7dba5b18dfe"}
	{"level":"info","ts":"2024-08-15T23:24:23.526410Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"2399e7dba5b18dfe"}
	{"level":"info","ts":"2024-08-15T23:24:23.526458Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"2399e7dba5b18dfe"}
	{"level":"info","ts":"2024-08-15T23:24:23.526483Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"2399e7dba5b18dfe"}
	{"level":"info","ts":"2024-08-15T23:24:23.526491Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"2399e7dba5b18dfe"}
	{"level":"info","ts":"2024-08-15T23:24:23.526495Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c8daa22dc1df7d56"}
	{"level":"info","ts":"2024-08-15T23:24:23.526502Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c8daa22dc1df7d56"}
	{"level":"info","ts":"2024-08-15T23:24:23.526513Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c8daa22dc1df7d56"}
	{"level":"info","ts":"2024-08-15T23:24:23.526797Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56"}
	{"level":"info","ts":"2024-08-15T23:24:23.526821Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56"}
	{"level":"info","ts":"2024-08-15T23:24:23.526843Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56"}
	{"level":"info","ts":"2024-08-15T23:24:23.526851Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c8daa22dc1df7d56"}
	{"level":"info","ts":"2024-08-15T23:24:23.528360Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-15T23:24:23.528429Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-15T23:24:23.528442Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-138000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> etcd [589038a9e36b] <==
	{"level":"error","ts":"2024-08-15T23:27:01.225566Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: request timed out\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-08-15T23:27:01.955785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-15T23:27:01.955873Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-15T23:27:01.955894Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-08-15T23:27:01.955913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to 2399e7dba5b18dfe at term 2"}
	{"level":"info","ts":"2024-08-15T23:27:01.956930Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to c8daa22dc1df7d56 at term 2"}
	{"level":"warn","ts":"2024-08-15T23:27:02.240020Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c8daa22dc1df7d56","rtt":"0s","error":"dial tcp 192.169.0.7:2380: i/o timeout"}
	{"level":"warn","ts":"2024-08-15T23:27:02.240122Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"2399e7dba5b18dfe","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:27:02.240149Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c8daa22dc1df7d56","rtt":"0s","error":"dial tcp 192.169.0.7:2380: i/o timeout"}
	{"level":"warn","ts":"2024-08-15T23:27:02.240076Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"2399e7dba5b18dfe","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:27:03.196724Z","caller":"etcdserver/server.go:2139","msg":"failed to publish local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-138000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2024-08-15T23:27:03.553996Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-15T23:27:03.554109Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-15T23:27:03.554129Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-08-15T23:27:03.554147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to 2399e7dba5b18dfe at term 2"}
	{"level":"info","ts":"2024-08-15T23:27:03.554158Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to c8daa22dc1df7d56 at term 2"}
	{"level":"warn","ts":"2024-08-15T23:27:04.731665Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419292119070,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-08-15T23:27:05.153207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-15T23:27:05.153356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-15T23:27:05.153366Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-08-15T23:27:05.153380Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to 2399e7dba5b18dfe at term 2"}
	{"level":"info","ts":"2024-08-15T23:27:05.153385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to c8daa22dc1df7d56 at term 2"}
	{"level":"warn","ts":"2024-08-15T23:27:05.231898Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419292119070,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-15T23:27:05.732233Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419292119070,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-15T23:27:06.233314Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419292119070,"retry-timeout":"500ms"}
	
	
	==> kernel <==
	 23:27:06 up 2 min,  0 users,  load average: 0.20, 0.15, 0.06
	Linux ha-138000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [c2a16126718b] <==
	I0815 23:23:47.704130       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:23:57.712115       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0815 23:23:57.712139       1 main.go:299] handling current node
	I0815 23:23:57.712152       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0815 23:23:57.712157       1 main.go:322] Node ha-138000-m02 has CIDR [10.244.1.0/24] 
	I0815 23:23:57.712420       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0815 23:23:57.712543       1 main.go:322] Node ha-138000-m03 has CIDR [10.244.2.0/24] 
	I0815 23:23:57.712720       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0815 23:23:57.712823       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:24:07.712424       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0815 23:24:07.712474       1 main.go:299] handling current node
	I0815 23:24:07.712488       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0815 23:24:07.712494       1 main.go:322] Node ha-138000-m02 has CIDR [10.244.1.0/24] 
	I0815 23:24:07.712623       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0815 23:24:07.712704       1 main.go:322] Node ha-138000-m03 has CIDR [10.244.2.0/24] 
	I0815 23:24:07.712814       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0815 23:24:07.712851       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:24:17.705680       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0815 23:24:17.705716       1 main.go:322] Node ha-138000-m02 has CIDR [10.244.1.0/24] 
	I0815 23:24:17.706225       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0815 23:24:17.706282       1 main.go:322] Node ha-138000-m03 has CIDR [10.244.2.0/24] 
	I0815 23:24:17.706514       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0815 23:24:17.706582       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:24:17.706957       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0815 23:24:17.707108       1 main.go:299] handling current node
	
	
	==> kube-apiserver [3067be70dd50] <==
	I0815 23:26:33.075028       1 options.go:228] external host was not specified, using 192.169.0.5
	I0815 23:26:33.076272       1 server.go:142] Version: v1.31.0
	I0815 23:26:33.076309       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:26:33.428918       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0815 23:26:33.438505       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 23:26:33.441482       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0815 23:26:33.441514       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0815 23:26:33.441724       1 instance.go:232] Using reconciler: lease
	W0815 23:26:53.430994       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0815 23:26:53.431041       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0815 23:26:53.442642       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0815 23:26:53.442682       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [065b34908ec9] <==
	I0815 23:26:23.285232       1 serving.go:386] Generated self-signed cert in-memory
	I0815 23:26:23.782222       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0815 23:26:23.782291       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:26:23.783394       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0815 23:26:23.783594       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0815 23:26:23.783712       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0815 23:26:23.783867       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0815 23:26:54.448628       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.5:8443/healthz\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:34400->192.169.0.5:8443: read: connection reset by peer"
	
	
	==> kube-proxy [fc2e141007ef] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 23:19:33.922056       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 23:19:33.939645       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0815 23:19:33.939881       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 23:19:33.966815       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 23:19:33.966963       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 23:19:33.967061       1 server_linux.go:169] "Using iptables Proxier"
	I0815 23:19:33.969119       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 23:19:33.969437       1 server.go:483] "Version info" version="v1.31.0"
	I0815 23:19:33.969466       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:19:33.970289       1 config.go:197] "Starting service config controller"
	I0815 23:19:33.970403       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 23:19:33.970441       1 config.go:104] "Starting endpoint slice config controller"
	I0815 23:19:33.970446       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 23:19:33.970870       1 config.go:326] "Starting node config controller"
	I0815 23:19:33.970895       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 23:19:34.070930       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 23:19:34.070930       1 shared_informer.go:320] Caches are synced for node config
	I0815 23:19:34.070944       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [7c25fb975759] <==
	E0815 23:19:26.587225       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0815 23:19:27.147361       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0815 23:22:08.672878       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-t5sdh\": pod busybox-7dff88458-t5sdh is already assigned to node \"ha-138000-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-t5sdh" node="ha-138000-m03"
	E0815 23:22:08.672963       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod b81fe134-5ef5-4074-920a-105e4bd801be(default/busybox-7dff88458-t5sdh) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-t5sdh"
	E0815 23:22:08.672983       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-t5sdh\": pod busybox-7dff88458-t5sdh is already assigned to node \"ha-138000-m03\"" pod="default/busybox-7dff88458-t5sdh"
	I0815 23:22:08.673000       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-t5sdh" node="ha-138000-m03"
	E0815 23:22:08.673278       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-wgww9\": pod busybox-7dff88458-wgww9 is already assigned to node \"ha-138000\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-wgww9" node="ha-138000"
	E0815 23:22:08.673460       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod b8eb799e-e761-4647-8aae-388c38bc936e(default/busybox-7dff88458-wgww9) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-wgww9"
	E0815 23:22:08.673519       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-wgww9\": pod busybox-7dff88458-wgww9 is already assigned to node \"ha-138000\"" pod="default/busybox-7dff88458-wgww9"
	I0815 23:22:08.673609       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-wgww9" node="ha-138000"
	E0815 23:22:36.177995       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-qpth7\": pod kube-proxy-qpth7 is already assigned to node \"ha-138000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-qpth7" node="ha-138000-m04"
	E0815 23:22:36.178149       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a343f80b-0fe9-4c88-9782-5fbf9a6170d1(kube-system/kube-proxy-qpth7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-qpth7"
	E0815 23:22:36.178181       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-qpth7\": pod kube-proxy-qpth7 is already assigned to node \"ha-138000-m04\"" pod="kube-system/kube-proxy-qpth7"
	I0815 23:22:36.178207       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-qpth7" node="ha-138000-m04"
	E0815 23:22:36.181318       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-m887r\": pod kindnet-m887r is already assigned to node \"ha-138000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-m887r" node="ha-138000-m04"
	E0815 23:22:36.181425       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod ba31865b-c712-47a8-9fd8-06420270ac8b(kube-system/kindnet-m887r) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-m887r"
	E0815 23:22:36.181440       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-m887r\": pod kindnet-m887r is already assigned to node \"ha-138000-m04\"" pod="kube-system/kindnet-m887r"
	I0815 23:22:36.181451       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-m887r" node="ha-138000-m04"
	E0815 23:22:36.197728       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-xc8mj\": pod kube-proxy-xc8mj is already assigned to node \"ha-138000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-xc8mj" node="ha-138000-m04"
	E0815 23:22:36.197783       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 661886ed-7ec0-401d-b893-4dd74852e477(kube-system/kube-proxy-xc8mj) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-xc8mj"
	E0815 23:22:36.197797       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-xc8mj\": pod kube-proxy-xc8mj is already assigned to node \"ha-138000-m04\"" pod="kube-system/kube-proxy-xc8mj"
	I0815 23:22:36.197815       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-xc8mj" node="ha-138000-m04"
	I0815 23:24:23.554288       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0815 23:24:23.554620       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0815 23:24:23.554869       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ac6935271595] <==
	E0815 23:26:28.518254       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:26:30.367151       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:26:30.367350       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:26:32.176181       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:26:32.176274       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:26:43.928456       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0815 23:26:43.929071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0815 23:26:46.274827       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0815 23:26:46.275005       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0815 23:26:50.098021       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0815 23:26:50.098077       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0815 23:26:52.088220       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0815 23:26:52.088690       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0815 23:26:52.981733       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0815 23:26:52.981795       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0815 23:26:54.449050       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:59210->192.169.0.5:8443: read: connection reset by peer
	E0815 23:26:54.449275       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:59210->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0815 23:26:54.449924       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:59206->192.169.0.5:8443: read: connection reset by peer
	E0815 23:26:54.450154       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:59206->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0815 23:26:54.450346       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:59224->192.169.0.5:8443: read: connection reset by peer
	E0815 23:26:54.450494       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:59224->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0815 23:26:54.450863       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:59196->192.169.0.5:8443: read: connection reset by peer
	E0815 23:26:54.451005       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:59196->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0815 23:26:54.451798       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:59190->192.169.0.5:8443: read: connection reset by peer
	E0815 23:26:54.451950       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:59190->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	
	
	==> kubelet <==
	Aug 15 23:26:52 ha-138000 kubelet[1576]: E0815 23:26:52.733606    1576 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-138000"
	Aug 15 23:26:52 ha-138000 kubelet[1576]: E0815 23:26:52.733740    1576 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-138000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Aug 15 23:26:52 ha-138000 kubelet[1576]: E0815 23:26:52.733877    1576 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-138000.17ec0a7d1e5ef862  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-138000,UID:ha-138000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-138000,},FirstTimestamp:2024-08-15 23:24:49.872787554 +0000 UTC m=+0.202964169,LastTimestamp:2024-08-15 23:24:49.872787554 +0000 UTC m=+0.202964169,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-138000,}"
	Aug 15 23:26:54 ha-138000 kubelet[1576]: I0815 23:26:54.223326    1576 scope.go:117] "RemoveContainer" containerID="b9045283b928de777b7f4d15d6de8027f3786562c871977d957e2dd9d2a95b29"
	Aug 15 23:26:54 ha-138000 kubelet[1576]: I0815 23:26:54.224065    1576 scope.go:117] "RemoveContainer" containerID="3067be70dd5080b978b1c43a941f337a0fe62fd963692f6a5910b7e9981d8828"
	Aug 15 23:26:54 ha-138000 kubelet[1576]: E0815 23:26:54.224142    1576 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-138000_kube-system(8df20a622868f60a60f4423e49478fa2)\"" pod="kube-system/kube-apiserver-ha-138000" podUID="8df20a622868f60a60f4423e49478fa2"
	Aug 15 23:26:55 ha-138000 kubelet[1576]: I0815 23:26:55.244291    1576 scope.go:117] "RemoveContainer" containerID="4745e33319a09d9bd5f6d9af75281ffc2c6681d2c3666cbb617dc23792e131ae"
	Aug 15 23:26:55 ha-138000 kubelet[1576]: I0815 23:26:55.245052    1576 scope.go:117] "RemoveContainer" containerID="065b34908ec9822585d14c60d989cc0d488ea0a61e4d398a5b59f18afaf682bd"
	Aug 15 23:26:55 ha-138000 kubelet[1576]: E0815 23:26:55.245163    1576 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-138000_kube-system(ed196a03081880609aebd781f662c0b9)\"" pod="kube-system/kube-controller-manager-ha-138000" podUID="ed196a03081880609aebd781f662c0b9"
	Aug 15 23:26:55 ha-138000 kubelet[1576]: I0815 23:26:55.486843    1576 scope.go:117] "RemoveContainer" containerID="3067be70dd5080b978b1c43a941f337a0fe62fd963692f6a5910b7e9981d8828"
	Aug 15 23:26:55 ha-138000 kubelet[1576]: E0815 23:26:55.487178    1576 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-138000_kube-system(8df20a622868f60a60f4423e49478fa2)\"" pod="kube-system/kube-apiserver-ha-138000" podUID="8df20a622868f60a60f4423e49478fa2"
	Aug 15 23:26:59 ha-138000 kubelet[1576]: I0815 23:26:59.740806    1576 kubelet_node_status.go:72] "Attempting to register node" node="ha-138000"
	Aug 15 23:26:59 ha-138000 kubelet[1576]: I0815 23:26:59.764458    1576 scope.go:117] "RemoveContainer" containerID="065b34908ec9822585d14c60d989cc0d488ea0a61e4d398a5b59f18afaf682bd"
	Aug 15 23:26:59 ha-138000 kubelet[1576]: E0815 23:26:59.764809    1576 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-138000_kube-system(ed196a03081880609aebd781f662c0b9)\"" pod="kube-system/kube-controller-manager-ha-138000" podUID="ed196a03081880609aebd781f662c0b9"
	Aug 15 23:26:59 ha-138000 kubelet[1576]: E0815 23:26:59.937430    1576 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-138000\" not found"
	Aug 15 23:27:00 ha-138000 kubelet[1576]: I0815 23:27:00.860041    1576 scope.go:117] "RemoveContainer" containerID="065b34908ec9822585d14c60d989cc0d488ea0a61e4d398a5b59f18afaf682bd"
	Aug 15 23:27:00 ha-138000 kubelet[1576]: E0815 23:27:00.860271    1576 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-138000_kube-system(ed196a03081880609aebd781f662c0b9)\"" pod="kube-system/kube-controller-manager-ha-138000" podUID="ed196a03081880609aebd781f662c0b9"
	Aug 15 23:27:01 ha-138000 kubelet[1576]: I0815 23:27:01.392106    1576 scope.go:117] "RemoveContainer" containerID="3067be70dd5080b978b1c43a941f337a0fe62fd963692f6a5910b7e9981d8828"
	Aug 15 23:27:01 ha-138000 kubelet[1576]: E0815 23:27:01.392292    1576 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-138000_kube-system(8df20a622868f60a60f4423e49478fa2)\"" pod="kube-system/kube-apiserver-ha-138000" podUID="8df20a622868f60a60f4423e49478fa2"
	Aug 15 23:27:01 ha-138000 kubelet[1576]: E0815 23:27:01.956391    1576 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-138000"
	Aug 15 23:27:01 ha-138000 kubelet[1576]: W0815 23:27:01.956546    1576 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 15 23:27:01 ha-138000 kubelet[1576]: E0815 23:27:01.956664    1576 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 15 23:27:01 ha-138000 kubelet[1576]: E0815 23:27:01.957086    1576 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-138000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Aug 15 23:27:05 ha-138000 kubelet[1576]: E0815 23:27:05.022839    1576 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-138000.17ec0a7d1e5ef862  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-138000,UID:ha-138000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-138000,},FirstTimestamp:2024-08-15 23:24:49.872787554 +0000 UTC m=+0.202964169,LastTimestamp:2024-08-15 23:24:49.872787554 +0000 UTC m=+0.202964169,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-138000,}"
	Aug 15 23:27:05 ha-138000 kubelet[1576]: E0815 23:27:05.023401    1576 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{ha-138000.17ec0a7d1e5ef862  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-138000,UID:ha-138000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-138000,},FirstTimestamp:2024-08-15 23:24:49.872787554 +0000 UTC m=+0.202964169,LastTimestamp:2024-08-15 23:24:49.872787554 +0000 UTC m=+0.202964169,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-138000,}"
	

                                                
                                                
-- /stdout --
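A note on the etcd output quoted above (an interpretation, not part of the log): member b8c6c7563d17d844 loops through pre-vote at term 2 while the two peers it solicits (2399e7dba5b18dfe and c8daa22dc1df7d56) are unreachable. Raft, as used by etcd, needs a majority of voting members, floor(n/2)+1, to elect a leader or serve linearizable reads, so a single live member of a three-member cluster can never collect the required 2 votes. That is consistent with the kube-apiserver's "Error creating leases: ... context deadline exceeded" above. A toy Go illustration of the arithmetic (hypothetical helper, not etcd code):

package main

import "fmt"

// quorum returns the number of voting members a Raft cluster needs to
// elect a leader or serve linearizable reads: floor(n/2) + 1.
func quorum(members int) int { return members/2 + 1 }

func main() {
	// The log above shows one live member of a 3-member cluster:
	// quorum(3) == 2 can never be reached, so pre-vote loops at term 2
	// and ReadIndex requests keep timing out.
	fmt.Println(quorum(3)) // 2
}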
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-138000 -n ha-138000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-138000 -n ha-138000: exit status 2 (153.721685ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-138000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (34.48s)
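For readers unfamiliar with the harness conventions above: helpers_test.go drives the out/minikube-darwin-amd64 binary and treats some non-zero exits as informative rather than fatal ("status error: exit status 2 (may be ok)"), because `minikube status` encodes component state in its exit code. A minimal, hypothetical sketch of that pattern (illustrative names; not the actual helpers_test.go code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// checkComponent runs `minikube status` with a Go template that extracts a
// single field (e.g. {{.APIServer}}) and returns the reported state plus
// the exit code. A non-zero exit is not necessarily a harness failure.
func checkComponent(binary, profile, field string) (state string, exitCode int, err error) {
	cmd := exec.Command(binary, "status", fmt.Sprintf("--format={{.%s}}", field), "-p", profile, "-n", profile)
	out, runErr := cmd.Output()
	state = strings.TrimSpace(string(out))
	if exitErr, ok := runErr.(*exec.ExitError); ok {
		// Non-zero exit "may be ok": e.g. exit status 2 with state
		// "Stopped" simply reports a stopped component.
		return state, exitErr.ExitCode(), nil
	}
	if runErr != nil {
		return state, -1, runErr
	}
	return state, 0, nil
}

func main() {
	state, code, err := checkComponent("out/minikube-darwin-amd64", "ha-138000", "APIServer")
	if err != nil {
		fmt.Println("run error:", err)
		return
	}
	fmt.Printf("APIServer=%s (exit %d)\n", state, code)
	if state != "Running" {
		fmt.Println("apiserver is not running, skipping kubectl commands")
	}
}

With the cluster in the state shown above, this would report "Stopped" with exit code 2, which is why the harness skips the kubectl-based checks.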

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
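The subtest then asserts on the JSON printed by the command above; the failure quoted below shows the expected "Degraded" versus actual "Stopped" status. A minimal, hypothetical sketch of that parse-and-check, assuming only the JSON shape quoted in the failure message (struct trimmed to the fields the assertion reads; not the real ha_test.go implementation):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors only the fields of `minikube profile list --output
// json` that the check reads; the real payload (quoted in the failure
// below) also carries the full cluster config.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64", "profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("unexpected JSON:", err)
		return
	}
	for _, p := range pl.Valid {
		// DegradedAfterSecondaryNodeDelete expects the remaining HA
		// cluster to report "Degraded"; the run below got "Stopped".
		if p.Name == "ha-138000" && p.Status != "Degraded" {
			fmt.Printf("profile %s: want Degraded, got %s\n", p.Name, p.Status)
		}
	}
}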
ha_test.go:413: expected profile "ha-138000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-138000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-138000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-138000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.169.0.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-138000 -n ha-138000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-138000 -n ha-138000: exit status 2 (150.088824ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-138000 logs -n 25: (2.253232146s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n ha-138000-m02 sudo cat                                                                                      | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /home/docker/cp-test_ha-138000-m03_ha-138000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-138000 cp ha-138000-m03:/home/docker/cp-test.txt                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04:/home/docker/cp-test_ha-138000-m03_ha-138000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n ha-138000-m04 sudo cat                                                                                      | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /home/docker/cp-test_ha-138000-m03_ha-138000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-138000 cp testdata/cp-test.txt                                                                                            | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-138000 cp ha-138000-m04:/home/docker/cp-test.txt                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1119105544/001/cp-test_ha-138000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-138000 cp ha-138000-m04:/home/docker/cp-test.txt                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000:/home/docker/cp-test_ha-138000-m04_ha-138000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n ha-138000 sudo cat                                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /home/docker/cp-test_ha-138000-m04_ha-138000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-138000 cp ha-138000-m04:/home/docker/cp-test.txt                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m02:/home/docker/cp-test_ha-138000-m04_ha-138000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n ha-138000-m02 sudo cat                                                                                      | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /home/docker/cp-test_ha-138000-m04_ha-138000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-138000 cp ha-138000-m04:/home/docker/cp-test.txt                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m03:/home/docker/cp-test_ha-138000-m04_ha-138000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n ha-138000-m03 sudo cat                                                                                      | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /home/docker/cp-test_ha-138000-m04_ha-138000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-138000 node stop m02 -v=7                                                                                                 | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-138000 node start m02 -v=7                                                                                                | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:24 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-138000 -v=7                                                                                                       | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:24 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-138000 -v=7                                                                                                            | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:24 PDT | 15 Aug 24 16:24 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-138000 --wait=true -v=7                                                                                                | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:24 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-138000                                                                                                            | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:26 PDT |                     |
	| node    | ha-138000 node delete m03 -v=7                                                                                               | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:26 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
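	
	Note: the cp/ssh pairs in the audit table above are minikube's cp-test round trip: a marker file is copied between nodes with "minikube cp", then read back on the destination node over "ssh -n" to confirm the transfer. A by-hand sketch of one row, assuming the ha-138000 profile from this run is still up:
	
	  out/minikube-darwin-amd64 -p ha-138000 cp ha-138000-m04:/home/docker/cp-test.txt ha-138000:/home/docker/cp-test_ha-138000-m04_ha-138000.txt
	  out/minikube-darwin-amd64 -p ha-138000 ssh -n ha-138000 -- sudo cat /home/docker/cp-test_ha-138000-m04_ha-138000.txt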
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 16:24:31
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 16:24:31.233096    3649 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:24:31.233281    3649 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:24:31.233287    3649 out.go:358] Setting ErrFile to fd 2...
	I0815 16:24:31.233290    3649 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:24:31.233463    3649 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-977/.minikube/bin
	I0815 16:24:31.234892    3649 out.go:352] Setting JSON to false
	I0815 16:24:31.259609    3649 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1442,"bootTime":1723762829,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0815 16:24:31.259835    3649 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 16:24:31.281220    3649 out.go:177] * [ha-138000] minikube v1.33.1 on Darwin 14.6.1
	I0815 16:24:31.323339    3649 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 16:24:31.323394    3649 notify.go:220] Checking for updates...
	I0815 16:24:31.366134    3649 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:24:31.387302    3649 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0815 16:24:31.408076    3649 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 16:24:31.429265    3649 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-977/.minikube
	I0815 16:24:31.450282    3649 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 16:24:31.472864    3649 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:24:31.473038    3649 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 16:24:31.473723    3649 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:24:31.473802    3649 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:24:31.483475    3649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52024
	I0815 16:24:31.483866    3649 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:24:31.484264    3649 main.go:141] libmachine: Using API Version  1
	I0815 16:24:31.484274    3649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:24:31.484483    3649 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:24:31.484590    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:31.513331    3649 out.go:177] * Using the hyperkit driver based on existing profile
	I0815 16:24:31.555013    3649 start.go:297] selected driver: hyperkit
	I0815 16:24:31.555040    3649 start.go:901] validating driver "hyperkit" against &{Name:ha-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:24:31.555294    3649 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 16:24:31.555482    3649 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:24:31.555679    3649 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19452-977/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0815 16:24:31.565322    3649 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0815 16:24:31.570113    3649 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:24:31.570133    3649 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0815 16:24:31.573295    3649 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 16:24:31.573376    3649 cni.go:84] Creating CNI manager for ""
	I0815 16:24:31.573385    3649 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0815 16:24:31.573458    3649 start.go:340] cluster config:
	{Name:ha-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:24:31.573576    3649 iso.go:125] acquiring lock: {Name:mkb93fc66b60be15dffa9eb2999a4be8d8d4d218 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:24:31.616257    3649 out.go:177] * Starting "ha-138000" primary control-plane node in "ha-138000" cluster
	I0815 16:24:31.636985    3649 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:24:31.637060    3649 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0815 16:24:31.637085    3649 cache.go:56] Caching tarball of preloaded images
	I0815 16:24:31.637273    3649 preload.go:172] Found /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0815 16:24:31.637292    3649 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:24:31.637487    3649 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:24:31.638371    3649 start.go:360] acquireMachinesLock for ha-138000: {Name:mkac8034c8a60617271f2754a03dcaf74fc4fef7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:24:31.638490    3649 start.go:364] duration metric: took 82.356µs to acquireMachinesLock for "ha-138000"
	I0815 16:24:31.638525    3649 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:24:31.638544    3649 fix.go:54] fixHost starting: 
	I0815 16:24:31.638958    3649 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:24:31.639008    3649 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:24:31.648062    3649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52026
	I0815 16:24:31.648421    3649 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:24:31.648791    3649 main.go:141] libmachine: Using API Version  1
	I0815 16:24:31.648804    3649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:24:31.649022    3649 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:24:31.649142    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:31.649278    3649 main.go:141] libmachine: (ha-138000) Calling .GetState
	I0815 16:24:31.649372    3649 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:24:31.649446    3649 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid from json: 3071
	I0815 16:24:31.650352    3649 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid 3071 missing from process table
	I0815 16:24:31.650388    3649 fix.go:112] recreateIfNeeded on ha-138000: state=Stopped err=<nil>
	I0815 16:24:31.650403    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	W0815 16:24:31.650489    3649 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:24:31.698042    3649 out.go:177] * Restarting existing hyperkit VM for "ha-138000" ...
	I0815 16:24:31.718584    3649 main.go:141] libmachine: (ha-138000) Calling .Start
	I0815 16:24:31.718879    3649 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:24:31.718940    3649 main.go:141] libmachine: (ha-138000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/hyperkit.pid
	I0815 16:24:31.721002    3649 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid 3071 missing from process table
	I0815 16:24:31.721020    3649 main.go:141] libmachine: (ha-138000) DBG | pid 3071 is in state "Stopped"
	I0815 16:24:31.721044    3649 main.go:141] libmachine: (ha-138000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/hyperkit.pid...
	I0815 16:24:31.721441    3649 main.go:141] libmachine: (ha-138000) DBG | Using UUID bf1b12d0-37a9-4c04-a028-0dd0a6dcd337
	I0815 16:24:31.829003    3649 main.go:141] libmachine: (ha-138000) DBG | Generated MAC 66:4d:cd:54:35:15
	I0815 16:24:31.829029    3649 main.go:141] libmachine: (ha-138000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000
	I0815 16:24:31.829133    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bf1b12d0-37a9-4c04-a028-0dd0a6dcd337", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c24e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:24:31.829169    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bf1b12d0-37a9-4c04-a028-0dd0a6dcd337", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c24e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:24:31.829203    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "bf1b12d0-37a9-4c04-a028-0dd0a6dcd337", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/ha-138000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"}
	I0815 16:24:31.829238    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U bf1b12d0-37a9-4c04-a028-0dd0a6dcd337 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/ha-138000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/console-ring -f kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"
	I0815 16:24:31.829247    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0815 16:24:31.830765    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 DEBUG: hyperkit: Pid is 3662
	I0815 16:24:31.831139    3649 main.go:141] libmachine: (ha-138000) DBG | Attempt 0
	I0815 16:24:31.831155    3649 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:24:31.831242    3649 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid from json: 3662
	I0815 16:24:31.832840    3649 main.go:141] libmachine: (ha-138000) DBG | Searching for 66:4d:cd:54:35:15 in /var/db/dhcpd_leases ...
	I0815 16:24:31.832917    3649 main.go:141] libmachine: (ha-138000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0815 16:24:31.832934    3649 main.go:141] libmachine: (ha-138000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be8e15}
	I0815 16:24:31.832943    3649 main.go:141] libmachine: (ha-138000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfdf74}
	I0815 16:24:31.832962    3649 main.go:141] libmachine: (ha-138000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfdedc}
	I0815 16:24:31.832970    3649 main.go:141] libmachine: (ha-138000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfde64}
	I0815 16:24:31.832977    3649 main.go:141] libmachine: (ha-138000) DBG | Found match: 66:4d:cd:54:35:15
	I0815 16:24:31.833028    3649 main.go:141] libmachine: (ha-138000) DBG | IP: 192.169.0.5
	I0815 16:24:31.833038    3649 main.go:141] libmachine: (ha-138000) Calling .GetConfigRaw
	I0815 16:24:31.833705    3649 main.go:141] libmachine: (ha-138000) Calling .GetIP
	I0815 16:24:31.833895    3649 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:24:31.834359    3649 machine.go:93] provisionDockerMachine start ...
	I0815 16:24:31.834370    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:31.834509    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:31.834611    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:31.834733    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:31.834881    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:31.834976    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:31.835114    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:24:31.835296    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:24:31.835304    3649 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 16:24:31.838795    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0815 16:24:31.891055    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0815 16:24:31.891732    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:24:31.891746    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:24:31.891753    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:24:31.891763    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:24:32.275543    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0815 16:24:32.275556    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0815 16:24:32.390162    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:24:32.390181    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:24:32.390193    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:24:32.390217    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:24:32.391060    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0815 16:24:32.391070    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0815 16:24:37.953601    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:37 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0815 16:24:37.953741    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:37 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0815 16:24:37.953751    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:37 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0815 16:24:37.980241    3649 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:24:37 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0815 16:24:42.910400    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 16:24:42.910418    3649 main.go:141] libmachine: (ha-138000) Calling .GetMachineName
	I0815 16:24:42.910559    3649 buildroot.go:166] provisioning hostname "ha-138000"
	I0815 16:24:42.910571    3649 main.go:141] libmachine: (ha-138000) Calling .GetMachineName
	I0815 16:24:42.910673    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:42.910777    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:42.910859    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:42.910959    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:42.911045    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:42.911177    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:24:42.911343    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:24:42.911352    3649 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-138000 && echo "ha-138000" | sudo tee /etc/hostname
	I0815 16:24:42.985179    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-138000
	
	I0815 16:24:42.985199    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:42.985338    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:42.985446    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:42.985538    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:42.985614    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:42.985749    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:24:42.985891    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:24:42.985905    3649 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-138000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-138000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-138000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 16:24:43.055472    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
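	
	Note: the hosts script above follows the Debian-style convention of mapping the machine's own hostname to 127.0.1.1 so the name resolves locally without DNS. To check the resulting entry on the VM (a sketch, assuming the profile is still up):
	
	  out/minikube-darwin-amd64 -p ha-138000 ssh -- grep 127.0.1.1 /etc/hosts
	  # expected: 127.0.1.1 ha-138000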
	I0815 16:24:43.055491    3649 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19452-977/.minikube CaCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19452-977/.minikube}
	I0815 16:24:43.055508    3649 buildroot.go:174] setting up certificates
	I0815 16:24:43.055515    3649 provision.go:84] configureAuth start
	I0815 16:24:43.055522    3649 main.go:141] libmachine: (ha-138000) Calling .GetMachineName
	I0815 16:24:43.055669    3649 main.go:141] libmachine: (ha-138000) Calling .GetIP
	I0815 16:24:43.055769    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:43.055868    3649 provision.go:143] copyHostCerts
	I0815 16:24:43.055901    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:24:43.055963    3649 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem, removing ...
	I0815 16:24:43.055971    3649 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:24:43.056106    3649 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem (1082 bytes)
	I0815 16:24:43.056322    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:24:43.056353    3649 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem, removing ...
	I0815 16:24:43.056358    3649 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:24:43.056432    3649 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem (1123 bytes)
	I0815 16:24:43.056583    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:24:43.056611    3649 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem, removing ...
	I0815 16:24:43.056615    3649 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:24:43.056681    3649 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem (1679 bytes)
	I0815 16:24:43.056840    3649 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem org=jenkins.ha-138000 san=[127.0.0.1 192.169.0.5 ha-138000 localhost minikube]
	I0815 16:24:43.121501    3649 provision.go:177] copyRemoteCerts
	I0815 16:24:43.121552    3649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 16:24:43.121568    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:43.121697    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:43.121782    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:43.121880    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:43.121971    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:24:43.165154    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 16:24:43.165236    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 16:24:43.200018    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 16:24:43.200092    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0815 16:24:43.220757    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 16:24:43.220829    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 16:24:43.240667    3649 provision.go:87] duration metric: took 185.141163ms to configureAuth
	I0815 16:24:43.240680    3649 buildroot.go:189] setting minikube options for container-runtime
	I0815 16:24:43.240849    3649 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:24:43.240863    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:43.240998    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:43.241100    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:43.241183    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:43.241273    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:43.241367    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:43.241484    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:24:43.241652    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:24:43.241660    3649 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0815 16:24:43.302884    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0815 16:24:43.302897    3649 buildroot.go:70] root file system type: tmpfs
	I0815 16:24:43.302965    3649 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0815 16:24:43.302977    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:43.303108    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:43.303198    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:43.303278    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:43.303364    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:43.303495    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:24:43.303638    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:24:43.303683    3649 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0815 16:24:43.378222    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0815 16:24:43.378246    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:43.378382    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:43.378461    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:43.378563    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:43.378649    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:43.378787    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:24:43.378932    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:24:43.378946    3649 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0815 16:24:45.080555    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0815 16:24:45.080572    3649 machine.go:96] duration metric: took 13.246248166s to provisionDockerMachine
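	
	Note: the "diff -u ... || { mv ...; systemctl ...; }" step above makes the unit install idempotent: the rendered docker.service only replaces the existing file (followed by daemon-reload, enable, and restart) when the two differ. Here diff fails because no unit existed yet, so the new file is moved into place and the multi-user.target symlink is created. To inspect the unit that ended up active (a sketch; minikube runs the same systemctl cat probe itself later in this log):
	
	  out/minikube-darwin-amd64 -p ha-138000 ssh -- sudo systemctl cat docker.service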
	I0815 16:24:45.080585    3649 start.go:293] postStartSetup for "ha-138000" (driver="hyperkit")
	I0815 16:24:45.080595    3649 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 16:24:45.080616    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:45.080791    3649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 16:24:45.080805    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:45.080908    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:45.080996    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:45.081081    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:45.081171    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:24:45.119742    3649 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 16:24:45.122978    3649 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 16:24:45.122994    3649 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/addons for local assets ...
	I0815 16:24:45.123095    3649 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/files for local assets ...
	I0815 16:24:45.123274    3649 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> 14982.pem in /etc/ssl/certs
	I0815 16:24:45.123280    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /etc/ssl/certs/14982.pem
	I0815 16:24:45.123473    3649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 16:24:45.130896    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:24:45.150554    3649 start.go:296] duration metric: took 69.960327ms for postStartSetup
	I0815 16:24:45.150578    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:45.150756    3649 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0815 16:24:45.150769    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:45.150849    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:45.150943    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:45.151041    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:45.151122    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:24:45.187860    3649 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0815 16:24:45.187918    3649 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0815 16:24:45.240522    3649 fix.go:56] duration metric: took 13.602028125s for fixHost
	I0815 16:24:45.240543    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:45.240694    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:45.240782    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:45.240866    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:45.240953    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:45.241079    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:24:45.241222    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:24:45.241230    3649 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 16:24:45.308498    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723764285.546205797
	
	I0815 16:24:45.308509    3649 fix.go:216] guest clock: 1723764285.546205797
	I0815 16:24:45.308515    3649 fix.go:229] Guest: 2024-08-15 16:24:45.546205797 -0700 PDT Remote: 2024-08-15 16:24:45.240533 -0700 PDT m=+14.043250910 (delta=305.672797ms)
	I0815 16:24:45.308536    3649 fix.go:200] guest clock delta is within tolerance: 305.672797ms
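	
	Note: the clock check above runs "date +%s.%N" in the guest over SSH and compares it against the host's wall clock; the ~306ms delta is inside minikube's tolerance, so the guest clock is left alone. The same comparison by hand (a sketch; python3 stands in for GNU date because macOS's BSD date does not support %N):
	
	  guest=$(out/minikube-darwin-amd64 -p ha-138000 ssh -- date +%s.%N | tr -d '\r')
	  host=$(python3 -c 'import time; print(time.time())')
	  python3 -c "print(f'delta: {abs($host - $guest):.3f}s')"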
	I0815 16:24:45.308540    3649 start.go:83] releasing machines lock for "ha-138000", held for 13.670085598s
	I0815 16:24:45.308562    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:45.308691    3649 main.go:141] libmachine: (ha-138000) Calling .GetIP
	I0815 16:24:45.308815    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:45.309125    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:45.309228    3649 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:24:45.309333    3649 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 16:24:45.309348    3649 ssh_runner.go:195] Run: cat /version.json
	I0815 16:24:45.309359    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:45.309374    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:24:45.309454    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:45.309481    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:24:45.309570    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:45.309586    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:24:45.309666    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:45.309673    3649 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:24:45.309753    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:24:45.309764    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:24:45.353596    3649 ssh_runner.go:195] Run: systemctl --version
	I0815 16:24:45.358729    3649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 16:24:45.412525    3649 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 16:24:45.412627    3649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 16:24:45.428066    3649 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 16:24:45.428077    3649 start.go:495] detecting cgroup driver to use...
	I0815 16:24:45.428183    3649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:24:45.444602    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0815 16:24:45.453384    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0815 16:24:45.462134    3649 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0815 16:24:45.462180    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0815 16:24:45.470781    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:24:45.479385    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0815 16:24:45.487960    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:24:45.496691    3649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 16:24:45.505669    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0815 16:24:45.514277    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0815 16:24:45.522851    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0815 16:24:45.531584    3649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 16:24:45.539529    3649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 16:24:45.547375    3649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:24:45.642699    3649 ssh_runner.go:195] Run: sudo systemctl restart containerd
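	
	Note: the sed edits above switch containerd to the cgroupfs cgroup driver (SystemdCgroup = false), normalize the runc runtime to io.containerd.runc.v2, pin the pause image to registry.k8s.io/pause:3.10, and point conf_dir at /etc/cni/net.d before the restart. To confirm the rendered config on the VM (a sketch):
	
	  out/minikube-darwin-amd64 -p ha-138000 ssh -- grep -E 'SystemdCgroup|conf_dir|sandbox_image' /etc/containerd/config.toml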
	I0815 16:24:45.657803    3649 start.go:495] detecting cgroup driver to use...
	I0815 16:24:45.657881    3649 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0815 16:24:45.669244    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:24:45.680074    3649 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 16:24:45.692718    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:24:45.703066    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:24:45.713234    3649 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0815 16:24:45.735236    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:24:45.745677    3649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:24:45.760852    3649 ssh_runner.go:195] Run: which cri-dockerd
	I0815 16:24:45.763929    3649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0815 16:24:45.771021    3649 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0815 16:24:45.784172    3649 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0815 16:24:45.887215    3649 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0815 16:24:45.995634    3649 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0815 16:24:45.995716    3649 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0815 16:24:46.010389    3649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:24:46.126522    3649 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0815 16:24:48.464685    3649 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.338152009s)
	I0815 16:24:48.464761    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0815 16:24:48.475831    3649 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0815 16:24:48.490512    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:24:48.501692    3649 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0815 16:24:48.596754    3649 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0815 16:24:48.705379    3649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:24:48.807279    3649 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0815 16:24:48.821232    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:24:48.832145    3649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:24:48.931537    3649 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0815 16:24:48.994946    3649 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0815 16:24:48.995028    3649 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
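
start.go:542 gives /var/run/cri-dockerd.sock up to 60 seconds to appear before probing it with stat, and then grants crictl the same grace period. A sketch of that poll-until-deadline pattern, assuming a plain stat loop with a fixed 500ms interval; minikube's internal retry helper may differ:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForPath polls until path exists or the deadline passes,
    // mirroring the "Will wait 60s for socket path" step above.
    func waitForPath(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            fmt.Println(err)
        }
    }
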
	I0815 16:24:48.999199    3649 start.go:563] Will wait 60s for crictl version
	I0815 16:24:48.999246    3649 ssh_runner.go:195] Run: which crictl
	I0815 16:24:49.002242    3649 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 16:24:49.031023    3649 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0815 16:24:49.031095    3649 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:24:49.049391    3649 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:24:49.110204    3649 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0815 16:24:49.110253    3649 main.go:141] libmachine: (ha-138000) Calling .GetIP
	I0815 16:24:49.110630    3649 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0815 16:24:49.114885    3649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
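
The bash one-liner above updates /etc/hosts idempotently: strip any line already tab-suffixed with host.minikube.internal, append the fresh 192.169.0.1 mapping to a temp file, then sudo cp it back over /etc/hosts. The same filter-and-append logic in Go, writing to a scratch file since touching the real /etc/hosts needs root (paths and sample input are illustrative):

    package main

    import (
        "log"
        "os"
        "strings"
    )

    // upsertHost drops any line tab-suffixed with name and appends a fresh
    // "ip<TAB>name" entry, mirroring the grep -v / echo pipeline above.
    func upsertHost(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        in := "127.0.0.1\tlocalhost\n10.0.2.2\thost.minikube.internal\n"
        out := upsertHost(in, "192.169.0.1", "host.minikube.internal")
        // Scratch file; the real flow copies the result over /etc/hosts with sudo.
        if err := os.WriteFile("hosts.new", []byte(out), 0o644); err != nil {
            log.Fatal(err)
        }
    }
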
	I0815 16:24:49.125317    3649 kubeadm.go:883] updating cluster {Name:ha-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 16:24:49.125409    3649 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:24:49.125461    3649 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0815 16:24:49.138389    3649 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0815 16:24:49.138400    3649 docker.go:615] Images already preloaded, skipping extraction
	I0815 16:24:49.138469    3649 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0815 16:24:49.152217    3649 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0815 16:24:49.152236    3649 cache_images.go:84] Images are preloaded, skipping loading
	I0815 16:24:49.152245    3649 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0815 16:24:49.152316    3649 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-138000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 16:24:49.152387    3649 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
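
The `docker info --format {{.CgroupDriver}}` run above asks the freshly restarted daemon which cgroup driver it actually ended up with, so the generated kubelet config (cgroupDriver: cgroupfs below) stays in agreement with it. The same probe from Go via os/exec, assuming a docker CLI on PATH:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same probe as the log line above; assumes a local docker CLI.
        out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
        if err != nil {
            fmt.Println("docker info failed:", err)
            return
        }
        fmt.Println("detected cgroup driver:", strings.TrimSpace(string(out)))
    }
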
	I0815 16:24:49.188207    3649 cni.go:84] Creating CNI manager for ""
	I0815 16:24:49.188219    3649 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0815 16:24:49.188233    3649 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 16:24:49.188247    3649 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-138000 NodeName:ha-138000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 16:24:49.188328    3649 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-138000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
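
One invariant worth noting in the generated config: podSubnet (10.244.0.0/16) and serviceSubnet (10.96.0.0/12) have to be disjoint ranges for pod and service routing to behave. A quick stdlib check of that property; illustrative, not part of minikube:

    package main

    import (
        "fmt"
        "net/netip"
    )

    // Aligned CIDR blocks are either disjoint or nested, so testing each
    // base address against the other prefix decides overlap.
    func overlaps(a, b netip.Prefix) bool {
        return a.Contains(b.Masked().Addr()) || b.Contains(a.Masked().Addr())
    }

    func main() {
        pods := netip.MustParsePrefix("10.244.0.0/16")    // podSubnet above
        services := netip.MustParsePrefix("10.96.0.0/12") // serviceSubnet above
        fmt.Println("pod/service CIDR overlap:", overlaps(pods, services)) // false
    }
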
	I0815 16:24:49.188340    3649 kube-vip.go:115] generating kube-vip config ...
	I0815 16:24:49.188395    3649 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 16:24:49.201717    3649 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 16:24:49.201810    3649 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
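
kube-vip.go:115/137 renders this static-pod manifest from the profile's APIServerHAVIP (192.169.0.254) and API port, and, per the "auto-enabling control-plane load-balancing" line, switches on lb_enable/lb_port. A much-reduced sketch of that templating step; the template below is illustrative and far smaller than minikube's real one:

    package main

    import (
        "log"
        "os"
        "text/template"
    )

    // Trimmed-down stand-in for minikube's kube-vip static-pod template.
    const manifest = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - name: kube-vip
        image: ghcr.io/kube-vip/kube-vip:v0.8.0
        args: ["manager"]
        env:
        - name: address
          value: "{{.VIP}}"
        - name: port
          value: "{{.Port}}"
        - name: lb_enable
          value: "{{.EnableLB}}"
      hostNetwork: true
    `

    func main() {
        t := template.Must(template.New("kube-vip").Parse(manifest))
        // Values from the run above; EnableLB mirrors the
        // "auto-enabling control-plane load-balancing" decision.
        data := map[string]any{"VIP": "192.169.0.254", "Port": 8443, "EnableLB": true}
        if err := t.Execute(os.Stdout, data); err != nil {
            log.Fatal(err)
        }
    }
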
	I0815 16:24:49.201860    3649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 16:24:49.210773    3649 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 16:24:49.210821    3649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0815 16:24:49.218705    3649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0815 16:24:49.232092    3649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 16:24:49.245488    3649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0815 16:24:49.259182    3649 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0815 16:24:49.272667    3649 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0815 16:24:49.275463    3649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 16:24:49.285341    3649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:24:49.379165    3649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:24:49.393690    3649 certs.go:68] Setting up /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000 for IP: 192.169.0.5
	I0815 16:24:49.393701    3649 certs.go:194] generating shared ca certs ...
	I0815 16:24:49.393711    3649 certs.go:226] acquiring lock for ca certs: {Name:mka1c88019f6064d2983ac988db71a67aeb65696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:24:49.393886    3649 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key
	I0815 16:24:49.393940    3649 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key
	I0815 16:24:49.393952    3649 certs.go:256] generating profile certs ...
	I0815 16:24:49.394054    3649 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.key
	I0815 16:24:49.394074    3649 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key.7af4c91a
	I0815 16:24:49.394091    3649 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt.7af4c91a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0815 16:24:49.771714    3649 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt.7af4c91a ...
	I0815 16:24:49.771738    3649 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt.7af4c91a: {Name:mkfdf96fafb98f174dadc5b6379869463c2a6ca3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:24:49.772085    3649 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key.7af4c91a ...
	I0815 16:24:49.772094    3649 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key.7af4c91a: {Name:mk0c2b233ae670508e502baf145f82fc5c8af979 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:24:49.772311    3649 certs.go:381] copying /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt.7af4c91a -> /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt
	I0815 16:24:49.772506    3649 certs.go:385] copying /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key.7af4c91a -> /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key
	I0815 16:24:49.772728    3649 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key
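
The regenerated apiserver cert above carries IP SANs covering the in-cluster service IP (10.96.0.1), loopback, the three control-plane node IPs, and the shared VIP 192.169.0.254, which is what lets clients validate TLS against any of those addresses. A compact sketch of issuing a cert with IP SANs using crypto/x509; self-signed here for brevity (the real flow signs with minikubeCA), and only a subset of the SANs is shown:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Subset of the IP SANs listed above: service IP, loopback,
        // first node IP, and the control-plane VIP.
        sans := []net.IP{
            net.ParseIP("10.96.0.1"),
            net.ParseIP("127.0.0.1"),
            net.ParseIP("192.169.0.5"),
            net.ParseIP("192.169.0.254"),
        }
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            IPAddresses:  sans, // emitted as subjectAltName IP entries
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // Self-signed for brevity; minikube signs with its CA instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
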
	I0815 16:24:49.772737    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 16:24:49.772760    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 16:24:49.772779    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 16:24:49.772798    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 16:24:49.772818    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 16:24:49.772836    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 16:24:49.772855    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 16:24:49.772873    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 16:24:49.772972    3649 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem (1338 bytes)
	W0815 16:24:49.773012    3649 certs.go:480] ignoring /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498_empty.pem, impossibly tiny 0 bytes
	I0815 16:24:49.773021    3649 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 16:24:49.773066    3649 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem (1082 bytes)
	I0815 16:24:49.773106    3649 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem (1123 bytes)
	I0815 16:24:49.773135    3649 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem (1679 bytes)
	I0815 16:24:49.773201    3649 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:24:49.773235    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /usr/share/ca-certificates/14982.pem
	I0815 16:24:49.773257    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:24:49.773276    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem -> /usr/share/ca-certificates/1498.pem
	I0815 16:24:49.773761    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 16:24:49.799905    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 16:24:49.819857    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 16:24:49.839446    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 16:24:49.859479    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 16:24:49.878979    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 16:24:49.898857    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 16:24:49.918488    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 16:24:49.938289    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /usr/share/ca-certificates/14982.pem (1708 bytes)
	I0815 16:24:49.958067    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 16:24:49.977508    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem --> /usr/share/ca-certificates/1498.pem (1338 bytes)
	I0815 16:24:49.997111    3649 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 16:24:50.010415    3649 ssh_runner.go:195] Run: openssl version
	I0815 16:24:50.014564    3649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14982.pem && ln -fs /usr/share/ca-certificates/14982.pem /etc/ssl/certs/14982.pem"
	I0815 16:24:50.022762    3649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14982.pem
	I0815 16:24:50.025974    3649 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:14 /usr/share/ca-certificates/14982.pem
	I0815 16:24:50.026012    3649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14982.pem
	I0815 16:24:50.030247    3649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14982.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 16:24:50.038688    3649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 16:24:50.046935    3649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:24:50.050205    3649 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:05 /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:24:50.050240    3649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:24:50.054437    3649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 16:24:50.062668    3649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1498.pem && ln -fs /usr/share/ca-certificates/1498.pem /etc/ssl/certs/1498.pem"
	I0815 16:24:50.070835    3649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1498.pem
	I0815 16:24:50.074144    3649 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:14 /usr/share/ca-certificates/1498.pem
	I0815 16:24:50.074179    3649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1498.pem
	I0815 16:24:50.078407    3649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1498.pem /etc/ssl/certs/51391683.0"
	I0815 16:24:50.087458    3649 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 16:24:50.090800    3649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 16:24:50.095300    3649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 16:24:50.099454    3649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 16:24:50.104181    3649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 16:24:50.108451    3649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 16:24:50.112679    3649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
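
Each openssl run above is a `-checkend 86400` probe: it exits non-zero if the certificate expires within the next 24 hours, which would trigger regeneration. The equivalent predicate with crypto/x509; the path below is illustrative:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM cert at path expires within d,
    // the same predicate as `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("expires within 24h:", soon)
    }
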
	I0815 16:24:50.116963    3649 kubeadm.go:392] StartCluster: {Name:ha-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:24:50.117082    3649 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0815 16:24:50.130554    3649 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 16:24:50.137992    3649 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 16:24:50.138004    3649 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 16:24:50.138048    3649 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 16:24:50.145558    3649 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 16:24:50.145859    3649 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-138000" does not appear in /Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:24:50.145940    3649 kubeconfig.go:62] /Users/jenkins/minikube-integration/19452-977/kubeconfig needs updating (will repair): [kubeconfig missing "ha-138000" cluster setting kubeconfig missing "ha-138000" context setting]
	I0815 16:24:50.146137    3649 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-977/kubeconfig: {Name:mk0f04b0199f84dc20eb294d5b790451b41e43fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:24:50.146558    3649 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:24:50.146752    3649 kapi.go:59] client config for ha-138000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.key", CAFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x5983f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0815 16:24:50.147060    3649 cert_rotation.go:140] Starting client certificate rotation controller
	I0815 16:24:50.147235    3649 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 16:24:50.154308    3649 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0815 16:24:50.154320    3649 kubeadm.go:597] duration metric: took 16.312125ms to restartPrimaryControlPlane
	I0815 16:24:50.154325    3649 kubeadm.go:394] duration metric: took 37.367941ms to StartCluster
	I0815 16:24:50.154333    3649 settings.go:142] acquiring lock: {Name:mk694dad19d37394fa6b13c51a7dc54b62e97c12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:24:50.154408    3649 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:24:50.154767    3649 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-977/kubeconfig: {Name:mk0f04b0199f84dc20eb294d5b790451b41e43fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:24:50.154992    3649 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 16:24:50.155005    3649 start.go:241] waiting for startup goroutines ...
	I0815 16:24:50.155016    3649 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 16:24:50.155148    3649 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:24:50.196433    3649 out.go:177] * Enabled addons: 
	I0815 16:24:50.217474    3649 addons.go:510] duration metric: took 62.454726ms for enable addons: enabled=[]
	I0815 16:24:50.217512    3649 start.go:246] waiting for cluster config update ...
	I0815 16:24:50.217524    3649 start.go:255] writing updated cluster config ...
	I0815 16:24:50.239613    3649 out.go:201] 
	I0815 16:24:50.260810    3649 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:24:50.260937    3649 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:24:50.282712    3649 out.go:177] * Starting "ha-138000-m02" control-plane node in "ha-138000" cluster
	I0815 16:24:50.324521    3649 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:24:50.324584    3649 cache.go:56] Caching tarball of preloaded images
	I0815 16:24:50.324754    3649 preload.go:172] Found /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0815 16:24:50.324772    3649 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:24:50.324901    3649 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:24:50.325802    3649 start.go:360] acquireMachinesLock for ha-138000-m02: {Name:mkac8034c8a60617271f2754a03dcaf74fc4fef7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:24:50.325911    3649 start.go:364] duration metric: took 84.439µs to acquireMachinesLock for "ha-138000-m02"
	I0815 16:24:50.325938    3649 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:24:50.325946    3649 fix.go:54] fixHost starting: m02
	I0815 16:24:50.326424    3649 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:24:50.326451    3649 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:24:50.335682    3649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52048
	I0815 16:24:50.336051    3649 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:24:50.336443    3649 main.go:141] libmachine: Using API Version  1
	I0815 16:24:50.336459    3649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:24:50.336675    3649 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:24:50.336791    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:24:50.336888    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetState
	I0815 16:24:50.336961    3649 main.go:141] libmachine: (ha-138000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:24:50.337044    3649 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid from json: 3600
	I0815 16:24:50.337930    3649 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid 3600 missing from process table
	I0815 16:24:50.337962    3649 fix.go:112] recreateIfNeeded on ha-138000-m02: state=Stopped err=<nil>
	I0815 16:24:50.337972    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	W0815 16:24:50.338053    3649 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:24:50.379676    3649 out.go:177] * Restarting existing hyperkit VM for "ha-138000-m02" ...
	I0815 16:24:50.400415    3649 main.go:141] libmachine: (ha-138000-m02) Calling .Start
	I0815 16:24:50.400691    3649 main.go:141] libmachine: (ha-138000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:24:50.400747    3649 main.go:141] libmachine: (ha-138000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/hyperkit.pid
	I0815 16:24:50.402488    3649 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid 3600 missing from process table
	I0815 16:24:50.402502    3649 main.go:141] libmachine: (ha-138000-m02) DBG | pid 3600 is in state "Stopped"
	I0815 16:24:50.402518    3649 main.go:141] libmachine: (ha-138000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/hyperkit.pid...
	I0815 16:24:50.402857    3649 main.go:141] libmachine: (ha-138000-m02) DBG | Using UUID 4cff9b5a-9fe3-4215-9139-05f05b79bce3
	I0815 16:24:50.432166    3649 main.go:141] libmachine: (ha-138000-m02) DBG | Generated MAC 9a:c2:e9:d7:1c:58
	I0815 16:24:50.432194    3649 main.go:141] libmachine: (ha-138000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000
	I0815 16:24:50.432283    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"4cff9b5a-9fe3-4215-9139-05f05b79bce3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b06c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:24:50.432316    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"4cff9b5a-9fe3-4215-9139-05f05b79bce3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b06c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:24:50.432360    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "4cff9b5a-9fe3-4215-9139-05f05b79bce3", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/ha-138000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"}
	I0815 16:24:50.432400    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 4cff9b5a-9fe3-4215-9139-05f05b79bce3 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/ha-138000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"
	I0815 16:24:50.432410    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0815 16:24:50.433800    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 DEBUG: hyperkit: Pid is 3670
	I0815 16:24:50.434270    3649 main.go:141] libmachine: (ha-138000-m02) DBG | Attempt 0
	I0815 16:24:50.434284    3649 main.go:141] libmachine: (ha-138000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:24:50.434361    3649 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid from json: 3670
	I0815 16:24:50.436313    3649 main.go:141] libmachine: (ha-138000-m02) DBG | Searching for 9a:c2:e9:d7:1c:58 in /var/db/dhcpd_leases ...
	I0815 16:24:50.436365    3649 main.go:141] libmachine: (ha-138000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0815 16:24:50.436381    3649 main.go:141] libmachine: (ha-138000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfdfb9}
	I0815 16:24:50.436395    3649 main.go:141] libmachine: (ha-138000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be8e15}
	I0815 16:24:50.436408    3649 main.go:141] libmachine: (ha-138000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfdf74}
	I0815 16:24:50.436429    3649 main.go:141] libmachine: (ha-138000-m02) DBG | Found match: 9a:c2:e9:d7:1c:58
	I0815 16:24:50.436463    3649 main.go:141] libmachine: (ha-138000-m02) DBG | IP: 192.169.0.6
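
To re-discover a restarted VM's address, the hyperkit driver scans macOS's /var/db/dhcpd_leases for the machine's generated MAC (9a:c2:e9:d7:1c:58 resolves to 192.169.0.6 above). A rough parser for that file, assuming its usual one-field-per-line `{ ... }` blocks with a `1,`-prefixed hw_address; this is a simplification of what the driver does:

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    // ipForMAC scans /var/db/dhcpd_leases-style blocks and returns the
    // ip_address of the entry whose hw_address ends in mac. The "1,"
    // prefix on hw_address is the hardware type, as in the log above.
    func ipForMAC(path, mac string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()
        var ip, hw string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case strings.HasPrefix(line, "ip_address="):
                ip = strings.TrimPrefix(line, "ip_address=")
            case strings.HasPrefix(line, "hw_address="):
                hw = strings.TrimPrefix(line, "hw_address=")
            case line == "}":
                if strings.HasSuffix(hw, mac) {
                    return ip, nil
                }
                ip, hw = "", ""
            }
        }
        return "", fmt.Errorf("%s not found in %s", mac, path)
    }

    func main() {
        ip, err := ipForMAC("/var/db/dhcpd_leases", "9a:c2:e9:d7:1c:58")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(ip)
    }
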
	I0815 16:24:50.436476    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetConfigRaw
	I0815 16:24:50.437131    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetIP
	I0815 16:24:50.437308    3649 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:24:50.437758    3649 machine.go:93] provisionDockerMachine start ...
	I0815 16:24:50.437768    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:24:50.437887    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:24:50.437997    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:24:50.438094    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:24:50.438199    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:24:50.438287    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:24:50.438398    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:24:50.438546    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:24:50.438554    3649 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 16:24:50.441514    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0815 16:24:50.450166    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0815 16:24:50.451006    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:24:50.451024    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:24:50.451053    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:24:50.451081    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:24:50.836828    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0815 16:24:50.836848    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0815 16:24:50.951307    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:24:50.951325    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:24:50.951354    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:24:50.951377    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:24:50.952254    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0815 16:24:50.952268    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:50 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0815 16:24:56.551926    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:56 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0815 16:24:56.551945    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:56 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0815 16:24:56.551957    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:56 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0815 16:24:56.576187    3649 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:24:56 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0815 16:25:25.506687    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 16:25:25.506701    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetMachineName
	I0815 16:25:25.506833    3649 buildroot.go:166] provisioning hostname "ha-138000-m02"
	I0815 16:25:25.506845    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetMachineName
	I0815 16:25:25.506942    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:25.507027    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:25.507110    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:25.507196    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:25.507274    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:25.507413    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:25:25.507576    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:25:25.507586    3649 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-138000-m02 && echo "ha-138000-m02" | sudo tee /etc/hostname
	I0815 16:25:25.578727    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-138000-m02
	
	I0815 16:25:25.578742    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:25.578877    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:25.578967    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:25.579045    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:25.579129    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:25.579269    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:25:25.579419    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:25:25.579432    3649 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-138000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-138000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-138000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 16:25:25.645270    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 16:25:25.645285    3649 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19452-977/.minikube CaCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19452-977/.minikube}
	I0815 16:25:25.645301    3649 buildroot.go:174] setting up certificates
	I0815 16:25:25.645307    3649 provision.go:84] configureAuth start
	I0815 16:25:25.645342    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetMachineName
	I0815 16:25:25.645472    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetIP
	I0815 16:25:25.645569    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:25.645659    3649 provision.go:143] copyHostCerts
	I0815 16:25:25.645686    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:25:25.645746    3649 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem, removing ...
	I0815 16:25:25.645752    3649 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:25:25.645910    3649 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem (1082 bytes)
	I0815 16:25:25.646118    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:25:25.646164    3649 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem, removing ...
	I0815 16:25:25.646169    3649 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:25:25.646253    3649 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem (1123 bytes)
	I0815 16:25:25.646420    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:25:25.646496    3649 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem, removing ...
	I0815 16:25:25.646504    3649 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:25:25.646598    3649 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem (1679 bytes)
	I0815 16:25:25.646765    3649 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem org=jenkins.ha-138000-m02 san=[127.0.0.1 192.169.0.6 ha-138000-m02 localhost minikube]
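provision.go generates a per-machine server certificate whose SAN list (127.0.0.1, 192.169.0.6, ha-138000-m02, localhost, minikube) covers every name the Docker TLS endpoint may be reached by. The SANs can be inspected on the host with stock OpenSSL (illustrative; the -ext flag needs OpenSSL 1.1.1 or newer):

    openssl x509 -noout -subject -ext subjectAltName \
      -in /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem
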
	I0815 16:25:25.825658    3649 provision.go:177] copyRemoteCerts
	I0815 16:25:25.825707    3649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 16:25:25.825722    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:25.825863    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:25.825953    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:25.826053    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:25.826140    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	I0815 16:25:25.862344    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 16:25:25.862417    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 16:25:25.882572    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 16:25:25.882639    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 16:25:25.902404    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 16:25:25.902470    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 16:25:25.922317    3649 provision.go:87] duration metric: took 277.0023ms to configureAuth
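With ca.pem, server.pem and server-key.pem in place under /etc/docker, dockerd can be started with --tlsverify (see the unit written below) and the host can reach the remote API on port 2376. A hand check from the host would look roughly like this (a sketch; client cert/key paths are the ones handled by copyHostCerts above):

    docker --tlsverify \
      --tlscacert=/Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem \
      --tlscert=/Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem \
      --tlskey=/Users/jenkins/minikube-integration/19452-977/.minikube/key.pem \
      -H tcp://192.169.0.6:2376 version
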
	I0815 16:25:25.922332    3649 buildroot.go:189] setting minikube options for container-runtime
	I0815 16:25:25.922512    3649 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:25:25.922526    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:25:25.922660    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:25.922753    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:25.922847    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:25.922931    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:25.923029    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:25.923140    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:25:25.923269    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:25:25.923277    3649 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0815 16:25:25.984805    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0815 16:25:25.984816    3649 buildroot.go:70] root file system type: tmpfs
	I0815 16:25:25.984938    3649 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0815 16:25:25.984949    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:25.985083    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:25.985169    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:25.985249    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:25.985329    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:25.985450    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:25:25.985607    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:25:25.985653    3649 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0815 16:25:26.056607    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0815 16:25:26.056625    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:26.056761    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:26.056863    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:26.056957    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:26.057043    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:26.057179    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:25:26.057326    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:25:26.057338    3649 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0815 16:25:27.732286    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0815 16:25:27.732301    3649 machine.go:96] duration metric: took 37.294661422s to provisionDockerMachine
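Two details of the unit update are worth spelling out. First, the empty ExecStart= line is the standard systemd idiom for overriding a service's command: it clears any inherited ExecStart before the replacement is declared, exactly as the comment block inside the unit explains. Second, the one-liner at 16:25:26.057 is a write-if-changed install: diff -u exits non-zero both when the files differ and when the target does not exist yet (the "can't stat" output above shows the latter case), and only then is the .new file moved into place and the service enabled and restarted. The same pattern, generalized (a sketch, not minikube's code):

    new=/lib/systemd/system/docker.service.new
    dst=/lib/systemd/system/docker.service
    if ! sudo diff -u "$dst" "$new" >/dev/null 2>&1; then
      sudo mv "$new" "$dst"            # contents differ, or dst is missing
      sudo systemctl daemon-reload
      sudo systemctl enable docker && sudo systemctl restart docker
    fi
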
	I0815 16:25:27.732309    3649 start.go:293] postStartSetup for "ha-138000-m02" (driver="hyperkit")
	I0815 16:25:27.732317    3649 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 16:25:27.732327    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:25:27.732516    3649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 16:25:27.732528    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:27.732625    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:27.732731    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:27.732809    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:27.732896    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	I0815 16:25:27.769243    3649 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 16:25:27.772355    3649 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 16:25:27.772366    3649 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/addons for local assets ...
	I0815 16:25:27.772467    3649 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/files for local assets ...
	I0815 16:25:27.772656    3649 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> 14982.pem in /etc/ssl/certs
	I0815 16:25:27.772668    3649 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /etc/ssl/certs/14982.pem
	I0815 16:25:27.772873    3649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 16:25:27.780868    3649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:25:27.799647    3649 start.go:296] duration metric: took 67.329668ms for postStartSetup
	I0815 16:25:27.799668    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:25:27.799829    3649 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0815 16:25:27.799842    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:27.799928    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:27.800000    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:27.800074    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:27.800149    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	I0815 16:25:27.837218    3649 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0815 16:25:27.837277    3649 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0815 16:25:27.871532    3649 fix.go:56] duration metric: took 37.545710837s for fixHost
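Because the guest root filesystem is tmpfs (detected at 16:25:25.984 above), nothing written to / survives a VM restart; that is why the docker unit is re-created on every boot and why fixHost ends by restoring /etc from the persistent /var/lib/minikube/backup directory. The restore itself is the plain rsync shown above: --archive preserves ownership and modes, --update skips files that are already newer on the live system. What gets restored can be listed directly (the same command minikube ran above; per machine.go:197 it prints just "etc"):

    sudo ls --almost-all -1 /var/lib/minikube/backup
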
	I0815 16:25:27.871559    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:27.871714    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:27.871806    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:27.871884    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:27.871974    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:27.872101    3649 main.go:141] libmachine: Using SSH client type: native
	I0815 16:25:27.872250    3649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x42caea0] 0x42cdc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:25:27.872257    3649 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 16:25:27.932451    3649 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723764328.172025914
	
	I0815 16:25:27.932464    3649 fix.go:216] guest clock: 1723764328.172025914
	I0815 16:25:27.932470    3649 fix.go:229] Guest: 2024-08-15 16:25:28.172025914 -0700 PDT Remote: 2024-08-15 16:25:27.871549 -0700 PDT m=+56.674410917 (delta=300.476914ms)
	I0815 16:25:27.932480    3649 fix.go:200] guest clock delta is within tolerance: 300.476914ms
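fix.go compares a timestamp read on the guest (date +%s.%N over SSH) against the host clock at the moment the command returned; here the skew is about 300ms, within tolerance, so no clock adjustment is needed. Reproducing the measurement by hand (a sketch; key path, user and address are taken from the sshutil lines above, and the sequential reads add some latency of their own):

    guest=$(ssh -i /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa \
        docker@192.169.0.6 'date +%s.%N')
    host=$(date +%s)                   # macOS date has no %N; second resolution suffices for a sanity check
    echo "guest=$guest host=$host"
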
	I0815 16:25:27.932484    3649 start.go:83] releasing machines lock for "ha-138000-m02", held for 37.606689063s
	I0815 16:25:27.932502    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:25:27.932640    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetIP
	I0815 16:25:27.955698    3649 out.go:177] * Found network options:
	I0815 16:25:27.976977    3649 out.go:177]   - NO_PROXY=192.169.0.5
	W0815 16:25:27.997880    3649 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 16:25:27.997916    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:25:27.998743    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:25:27.998959    3649 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:25:27.999062    3649 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 16:25:27.999103    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	W0815 16:25:27.999149    3649 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 16:25:27.999255    3649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 16:25:27.999276    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:25:27.999310    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:27.999538    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:25:27.999567    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:27.999751    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:25:27.999778    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:27.999890    3649 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:25:27.999915    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	I0815 16:25:28.000017    3649 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	W0815 16:25:28.032774    3649 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 16:25:28.032832    3649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 16:25:28.085200    3649 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 16:25:28.085222    3649 start.go:495] detecting cgroup driver to use...
	I0815 16:25:28.085337    3649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:25:28.101256    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0815 16:25:28.110461    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0815 16:25:28.119610    3649 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0815 16:25:28.119671    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0815 16:25:28.128841    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:25:28.137598    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0815 16:25:28.146542    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:25:28.155343    3649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 16:25:28.164400    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0815 16:25:28.173324    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0815 16:25:28.182447    3649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0815 16:25:28.191439    3649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 16:25:28.199534    3649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 16:25:28.207385    3649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:25:28.307256    3649 ssh_runner.go:195] Run: sudo systemctl restart containerd
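start.go settles on a cgroup driver and then rewrites /etc/containerd/config.toml so containerd agrees: SystemdCgroup is forced to false (cgroupfs), the legacy io.containerd.runtime.v1.linux and runc.v1 shims are mapped to runc.v2, and the CNI conf_dir is pinned to /etc/cni/net.d, after which containerd is restarted. Whether the runtimes actually agree can be verified on the guest with stock tooling (illustrative):

    grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = false
    docker info --format '{{.CgroupDriver}}'              # expect: cgroupfs, once dockerd is up
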
	I0815 16:25:28.326701    3649 start.go:495] detecting cgroup driver to use...
	I0815 16:25:28.326772    3649 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0815 16:25:28.345963    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:25:28.361865    3649 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 16:25:28.380032    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:25:28.392583    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:25:28.403338    3649 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0815 16:25:28.425534    3649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:25:28.435952    3649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:25:28.450826    3649 ssh_runner.go:195] Run: which cri-dockerd
	I0815 16:25:28.453880    3649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0815 16:25:28.461213    3649 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0815 16:25:28.474603    3649 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0815 16:25:28.569552    3649 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0815 16:25:28.669486    3649 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0815 16:25:28.669508    3649 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0815 16:25:28.684315    3649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:25:28.789048    3649 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0815 16:26:29.810459    3649 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.021600349s)
	I0815 16:26:29.810528    3649 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0815 16:26:29.846420    3649 out.go:201] 
	W0815 16:26:29.868048    3649 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 15 23:25:25 ha-138000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 15 23:25:25 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:25.509065819Z" level=info msg="Starting up"
	Aug 15 23:25:25 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:25.509592997Z" level=info msg="containerd not running, starting managed containerd"
	Aug 15 23:25:25 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:25.510095236Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=519
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.527964893Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.542679991Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.542751629Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.542813012Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.542847466Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.542971116Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.543022892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.543226251Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.543273769Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.543307918Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.543342764Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.543453732Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.543640009Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.545258649Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.545308637Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.545445977Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.545492906Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.545600399Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.545650460Z" level=info msg="metadata content store policy set" policy=shared
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.547717368Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.547830207Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.547884234Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.548013412Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.548060318Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.548127353Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.548391092Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.552607490Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.552725748Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.552840021Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.552885041Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.552918051Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.552984961Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553030860Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553064737Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553096185Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553126522Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553162873Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553202352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553233572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553266178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553297774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553327631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553357374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553386246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553418283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553450098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553484562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553517795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553547301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553576466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553607695Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553650178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553684928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553713941Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553789004Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553836418Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553870209Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.553907631Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.554030910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.554116351Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.554162242Z" level=info msg="NRI interface is disabled by configuration."
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.554425646Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.554560798Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.554647146Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 15 23:25:25 ha-138000-m02 dockerd[519]: time="2024-08-15T23:25:25.554690019Z" level=info msg="containerd successfully booted in 0.027466s"
	Aug 15 23:25:26 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:26.539092962Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 15 23:25:26 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:26.579801466Z" level=info msg="Loading containers: start."
	Aug 15 23:25:26 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:26.753629817Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 15 23:25:27 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:27.897778336Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 15 23:25:27 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:27.941918967Z" level=info msg="Loading containers: done."
	Aug 15 23:25:27 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:27.949162882Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 15 23:25:27 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:27.949300191Z" level=info msg="Daemon has completed initialization"
	Aug 15 23:25:27 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:27.970294492Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 15 23:25:27 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:27.970499353Z" level=info msg="API listen on [::]:2376"
	Aug 15 23:25:27 ha-138000-m02 systemd[1]: Started Docker Application Container Engine.
	Aug 15 23:25:29 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:29.040016751Z" level=info msg="Processing signal 'terminated'"
	Aug 15 23:25:29 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:29.040919337Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 15 23:25:29 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:29.041235066Z" level=info msg="Daemon shutdown complete"
	Aug 15 23:25:29 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:29.041343453Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 15 23:25:29 ha-138000-m02 dockerd[512]: time="2024-08-15T23:25:29.041349896Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 15 23:25:29 ha-138000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Aug 15 23:25:30 ha-138000-m02 systemd[1]: docker.service: Deactivated successfully.
	Aug 15 23:25:30 ha-138000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Aug 15 23:25:30 ha-138000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 15 23:25:30 ha-138000-m02 dockerd[1088]: time="2024-08-15T23:25:30.078915638Z" level=info msg="Starting up"
	Aug 15 23:26:30 ha-138000-m02 dockerd[1088]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 15 23:26:30 ha-138000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 15 23:26:30 ha-138000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 15 23:26:30 ha-138000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0815 16:26:29.868131    3649 out.go:270] * 
	W0815 16:26:29.869562    3649 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 16:26:29.930973    3649 out.go:201] 
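The journalctl capture localizes the failure precisely: the first dockerd (pid 512) came up normally with its own managed containerd and was then deliberately stopped, but the restarted dockerd (pid 1088) spent its entire 60-second startup budget dialing /run/containerd/containerd.sock and exited, so systemd marked docker.service failed and minikube aborted with RUNTIME_ENABLE. Note the socket difference: the first start used the managed socket under /var/run/docker/containerd/, while the second start waited on the system containerd socket. The natural next diagnostic steps on the guest would be (standard systemd/containerd commands, illustrative):

    systemctl status containerd --no-pager        # is the system containerd unit actually up?
    ls -l /run/containerd/containerd.sock         # does the socket dockerd is dialing exist?
    sudo journalctl -u containerd --no-pager | tail -n 50
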
	
	
	==> Docker <==
	Aug 15 23:25:38 ha-138000 dockerd[1164]: time="2024-08-15T23:25:38.961334482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:25:59 ha-138000 dockerd[1157]: time="2024-08-15T23:25:59.760572159Z" level=info msg="ignoring event" container=4745e33319a09d9bd5f6d9af75281ffc2c6681d2c3666cbb617dc23792e131ae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 15 23:25:59 ha-138000 dockerd[1164]: time="2024-08-15T23:25:59.761266003Z" level=info msg="shim disconnected" id=4745e33319a09d9bd5f6d9af75281ffc2c6681d2c3666cbb617dc23792e131ae namespace=moby
	Aug 15 23:25:59 ha-138000 dockerd[1164]: time="2024-08-15T23:25:59.761315524Z" level=warning msg="cleaning up after shim disconnected" id=4745e33319a09d9bd5f6d9af75281ffc2c6681d2c3666cbb617dc23792e131ae namespace=moby
	Aug 15 23:25:59 ha-138000 dockerd[1164]: time="2024-08-15T23:25:59.761324344Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 15 23:25:59 ha-138000 dockerd[1157]: time="2024-08-15T23:25:59.792363842Z" level=info msg="ignoring event" container=b9045283b928de777b7f4d15d6de8027f3786562c871977d957e2dd9d2a95b29 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 15 23:25:59 ha-138000 dockerd[1164]: time="2024-08-15T23:25:59.792820549Z" level=info msg="shim disconnected" id=b9045283b928de777b7f4d15d6de8027f3786562c871977d957e2dd9d2a95b29 namespace=moby
	Aug 15 23:25:59 ha-138000 dockerd[1164]: time="2024-08-15T23:25:59.792922467Z" level=warning msg="cleaning up after shim disconnected" id=b9045283b928de777b7f4d15d6de8027f3786562c871977d957e2dd9d2a95b29 namespace=moby
	Aug 15 23:25:59 ha-138000 dockerd[1164]: time="2024-08-15T23:25:59.792961894Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 15 23:26:22 ha-138000 dockerd[1164]: time="2024-08-15T23:26:22.968000771Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 15 23:26:22 ha-138000 dockerd[1164]: time="2024-08-15T23:26:22.968069651Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 15 23:26:22 ha-138000 dockerd[1164]: time="2024-08-15T23:26:22.968081146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:26:22 ha-138000 dockerd[1164]: time="2024-08-15T23:26:22.968181768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:26:32 ha-138000 dockerd[1164]: time="2024-08-15T23:26:32.980665382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 15 23:26:32 ha-138000 dockerd[1164]: time="2024-08-15T23:26:32.980751317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 15 23:26:32 ha-138000 dockerd[1164]: time="2024-08-15T23:26:32.980764715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:26:32 ha-138000 dockerd[1164]: time="2024-08-15T23:26:32.980890517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:26:53 ha-138000 dockerd[1157]: time="2024-08-15T23:26:53.459040666Z" level=info msg="ignoring event" container=3067be70dd5080b978b1c43a941f337a0fe62fd963692f6a5910b7e9981d8828 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 15 23:26:53 ha-138000 dockerd[1164]: time="2024-08-15T23:26:53.459615849Z" level=info msg="shim disconnected" id=3067be70dd5080b978b1c43a941f337a0fe62fd963692f6a5910b7e9981d8828 namespace=moby
	Aug 15 23:26:53 ha-138000 dockerd[1164]: time="2024-08-15T23:26:53.459664466Z" level=warning msg="cleaning up after shim disconnected" id=3067be70dd5080b978b1c43a941f337a0fe62fd963692f6a5910b7e9981d8828 namespace=moby
	Aug 15 23:26:53 ha-138000 dockerd[1164]: time="2024-08-15T23:26:53.459673170Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 15 23:26:54 ha-138000 dockerd[1157]: time="2024-08-15T23:26:54.466022234Z" level=info msg="ignoring event" container=065b34908ec9822585d14c60d989cc0d488ea0a61e4d398a5b59f18afaf682bd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 15 23:26:54 ha-138000 dockerd[1164]: time="2024-08-15T23:26:54.466561396Z" level=info msg="shim disconnected" id=065b34908ec9822585d14c60d989cc0d488ea0a61e4d398a5b59f18afaf682bd namespace=moby
	Aug 15 23:26:54 ha-138000 dockerd[1164]: time="2024-08-15T23:26:54.467070687Z" level=warning msg="cleaning up after shim disconnected" id=065b34908ec9822585d14c60d989cc0d488ea0a61e4d398a5b59f18afaf682bd namespace=moby
	Aug 15 23:26:54 ha-138000 dockerd[1164]: time="2024-08-15T23:26:54.467080180Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3067be70dd508       604f5db92eaa8                                                                                         36 seconds ago      Exited              kube-apiserver            3                   7152268f8eec4       kube-apiserver-ha-138000
	065b34908ec98       045733566833c                                                                                         46 seconds ago      Exited              kube-controller-manager   3                   4650262cc9c5d       kube-controller-manager-ha-138000
	efbc09be8eda5       38af8ddebf499                                                                                         2 minutes ago       Running             kube-vip                  0                   0c665afd15e6f       kube-vip-ha-138000
	589038a9e36bd       2e96e5913fc06                                                                                         2 minutes ago       Running             etcd                      1                   ec285d4826baa       etcd-ha-138000
	ac6935271595c       1766f54c897f0                                                                                         2 minutes ago       Running             kube-scheduler            1                   07c1c62e41d3a       kube-scheduler-ha-138000
	8f20284cd3969       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   4 minutes ago       Exited              busybox                   0                   bfc975a528b9e       busybox-7dff88458-wgww9
	42f5d82b00417       cbb01a7bd410d                                                                                         7 minutes ago       Exited              coredns                   0                   10891f8fbffcc       coredns-6f6b679f8f-dmgt5
	3e8b806ef4f33       cbb01a7bd410d                                                                                         7 minutes ago       Exited              coredns                   0                   096ab15603b01       coredns-6f6b679f8f-zc8jj
	6a1122913bb18       6e38f40d628db                                                                                         7 minutes ago       Exited              storage-provisioner       0                   e30dde4a5a10d       storage-provisioner
	c2a16126718b3       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              7 minutes ago       Exited              kindnet-cni               0                   e260a94a203af       kindnet-77dc6
	fc2e141007efb       ad83b2ca7b09e                                                                                         7 minutes ago       Exited              kube-proxy                0                   5b40cdd6b2c24       kube-proxy-cznkn
	e919017e14bb9       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Exited              kube-vip                  0                   db3c88138b89a       kube-vip-ha-138000
	7c25fb975759b       1766f54c897f0                                                                                         7 minutes ago       Exited              kube-scheduler            0                   edd6b77fdd102       kube-scheduler-ha-138000
	0cde5d8b93f58       2e96e5913fc06                                                                                         7 minutes ago       Exited              etcd                      0                   d0d07c194103e       etcd-ha-138000
	
	
	==> coredns [3e8b806ef4f3] <==
	[INFO] 10.244.2.2:44773 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000075522s
	[INFO] 10.244.2.2:53805 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098349s
	[INFO] 10.244.2.2:34369 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000122495s
	[INFO] 10.244.0.4:59671 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000077646s
	[INFO] 10.244.0.4:41185 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000079139s
	[INFO] 10.244.0.4:42405 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000092065s
	[INFO] 10.244.0.4:54373 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000049998s
	[INFO] 10.244.0.4:57169 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000050383s
	[INFO] 10.244.0.4:37825 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085108s
	[INFO] 10.244.1.2:59685 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000072268s
	[INFO] 10.244.1.2:32923 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073054s
	[INFO] 10.244.2.2:50876 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000068102s
	[INFO] 10.244.2.2:54719 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000762s
	[INFO] 10.244.0.4:57395 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000091608s
	[INFO] 10.244.0.4:37936 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000031052s
	[INFO] 10.244.1.2:58408 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000088888s
	[INFO] 10.244.1.2:42731 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000114857s
	[INFO] 10.244.1.2:41638 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000082664s
	[INFO] 10.244.2.2:52666 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092331s
	[INFO] 10.244.2.2:41501 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000093116s
	[INFO] 10.244.0.4:48200 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000075447s
	[INFO] 10.244.0.4:35056 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000091854s
	[INFO] 10.244.0.4:36257 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000057922s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [42f5d82b0041] <==
	[INFO] 10.244.1.2:50104 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.009876264s
	[INFO] 10.244.0.4:33653 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115506s
	[INFO] 10.244.0.4:45180 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000042438s
	[INFO] 10.244.1.2:60312 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000068925s
	[INFO] 10.244.1.2:38521 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000124425s
	[INFO] 10.244.1.2:51675 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125646s
	[INFO] 10.244.1.2:33974 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000078827s
	[INFO] 10.244.2.2:38966 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000078816s
	[INFO] 10.244.2.2:56056 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000620092s
	[INFO] 10.244.2.2:32787 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000109221s
	[INFO] 10.244.2.2:55701 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000039601s
	[INFO] 10.244.0.4:52543 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000083971s
	[INFO] 10.244.0.4:55050 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000146353s
	[INFO] 10.244.1.2:52165 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100415s
	[INFO] 10.244.1.2:41123 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060755s
	[INFO] 10.244.2.2:56460 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087503s
	[INFO] 10.244.2.2:36407 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009778s
	[INFO] 10.244.0.4:40764 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000037536s
	[INFO] 10.244.0.4:58473 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000029335s
	[INFO] 10.244.1.2:38640 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000118481s
	[INFO] 10.244.2.2:46151 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000117088s
	[INFO] 10.244.2.2:34054 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000108858s
	[INFO] 10.244.0.4:56735 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000069666s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0815 23:27:08.792324    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0815 23:27:08.794253    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0815 23:27:08.796043    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0815 23:27:08.797791    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0815 23:27:08.799168    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
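	The describe-nodes failure is secondary: kubectl cannot reach the apiserver on localhost:8443 because the apiserver container is crash-looping (see the kube-apiserver and kubelet sections below). A minimal liveness check from the host, assuming the ha-138000 VM is still reachable over SSH:
	
	  minikube ssh -p ha-138000 -- sudo curl -sk https://localhost:8443/healthz
	  # a healthy apiserver prints "ok"; "connection refused" matches the error above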
	
	
	==> dmesg <==
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.035894] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007974] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.668520] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007326] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.770013] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +1.360688] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000015] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +1.372641] systemd-fstab-generator[478]: Ignoring "noauto" option for root device
	[  +0.100663] systemd-fstab-generator[490]: Ignoring "noauto" option for root device
	[  +1.991571] systemd-fstab-generator[1087]: Ignoring "noauto" option for root device
	[  +0.238035] systemd-fstab-generator[1123]: Ignoring "noauto" option for root device
	[  +0.057754] kauditd_printk_skb: 101 callbacks suppressed
	[  +0.054898] systemd-fstab-generator[1135]: Ignoring "noauto" option for root device
	[  +0.128367] systemd-fstab-generator[1149]: Ignoring "noauto" option for root device
	[  +2.481651] systemd-fstab-generator[1367]: Ignoring "noauto" option for root device
	[  +0.106020] systemd-fstab-generator[1379]: Ignoring "noauto" option for root device
	[  +0.099803] systemd-fstab-generator[1391]: Ignoring "noauto" option for root device
	[  +0.128219] systemd-fstab-generator[1406]: Ignoring "noauto" option for root device
	[  +0.443324] systemd-fstab-generator[1569]: Ignoring "noauto" option for root device
	[  +6.920369] kauditd_printk_skb: 212 callbacks suppressed
	[Aug15 23:25] kauditd_printk_skb: 40 callbacks suppressed
	
	
	==> etcd [0cde5d8b93f5] <==
	2024/08/15 23:24:23 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-15T23:24:23.473964Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"645.370724ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-15T23:24:23.473975Z","caller":"traceutil/trace.go:171","msg":"trace[1846477602] range","detail":"{range_begin:/registry/controllerrevisions/; range_end:/registry/controllerrevisions0; }","duration":"645.382858ms","start":"2024-08-15T23:24:22.828589Z","end":"2024-08-15T23:24:23.473972Z","steps":["trace[1846477602] 'agreement among raft nodes before linearized reading'  (duration: 645.370611ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T23:24:23.473985Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T23:24:22.828583Z","time spent":"645.398405ms","remote":"127.0.0.1:48088","response type":"/etcdserverpb.KV/Range","request count":0,"request size":66,"response count":0,"response size":0,"request content":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" count_only:true "}
	2024/08/15 23:24:23 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-15T23:24:23.523484Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T23:24:23.523513Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-15T23:24:23.523614Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-15T23:24:23.526296Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"2399e7dba5b18dfe"}
	{"level":"info","ts":"2024-08-15T23:24:23.526315Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"2399e7dba5b18dfe"}
	{"level":"info","ts":"2024-08-15T23:24:23.526356Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"2399e7dba5b18dfe"}
	{"level":"info","ts":"2024-08-15T23:24:23.526410Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"2399e7dba5b18dfe"}
	{"level":"info","ts":"2024-08-15T23:24:23.526458Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"2399e7dba5b18dfe"}
	{"level":"info","ts":"2024-08-15T23:24:23.526483Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"2399e7dba5b18dfe"}
	{"level":"info","ts":"2024-08-15T23:24:23.526491Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"2399e7dba5b18dfe"}
	{"level":"info","ts":"2024-08-15T23:24:23.526495Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c8daa22dc1df7d56"}
	{"level":"info","ts":"2024-08-15T23:24:23.526502Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c8daa22dc1df7d56"}
	{"level":"info","ts":"2024-08-15T23:24:23.526513Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c8daa22dc1df7d56"}
	{"level":"info","ts":"2024-08-15T23:24:23.526797Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56"}
	{"level":"info","ts":"2024-08-15T23:24:23.526821Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56"}
	{"level":"info","ts":"2024-08-15T23:24:23.526843Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56"}
	{"level":"info","ts":"2024-08-15T23:24:23.526851Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c8daa22dc1df7d56"}
	{"level":"info","ts":"2024-08-15T23:24:23.528360Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-15T23:24:23.528429Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-15T23:24:23.528442Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-138000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> etcd [589038a9e36b] <==
	{"level":"info","ts":"2024-08-15T23:27:05.153366Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-08-15T23:27:05.153380Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to 2399e7dba5b18dfe at term 2"}
	{"level":"info","ts":"2024-08-15T23:27:05.153385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to c8daa22dc1df7d56 at term 2"}
	{"level":"warn","ts":"2024-08-15T23:27:05.231898Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419292119070,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-15T23:27:05.732233Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419292119070,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-15T23:27:06.233314Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419292119070,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-15T23:27:06.734497Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419292119070,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-08-15T23:27:06.752793Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-15T23:27:06.752884Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-15T23:27:06.752903Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-08-15T23:27:06.752922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to 2399e7dba5b18dfe at term 2"}
	{"level":"info","ts":"2024-08-15T23:27:06.752932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to c8daa22dc1df7d56 at term 2"}
	{"level":"warn","ts":"2024-08-15T23:27:07.235677Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419292119070,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-15T23:27:07.241633Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"2399e7dba5b18dfe","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:27:07.241748Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c8daa22dc1df7d56","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-08-15T23:27:07.241772Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"2399e7dba5b18dfe","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:27:07.241827Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c8daa22dc1df7d56","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-08-15T23:27:07.738731Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419292119070,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-15T23:27:08.238881Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419292119070,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-08-15T23:27:08.352182Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-15T23:27:08.352228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-15T23:27:08.352239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-08-15T23:27:08.352250Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to 2399e7dba5b18dfe at term 2"}
	{"level":"info","ts":"2024-08-15T23:27:08.352255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to c8daa22dc1df7d56 at term 2"}
	{"level":"warn","ts":"2024-08-15T23:27:08.739830Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419292119070,"retry-timeout":"500ms"}
	
	
	==> kernel <==
	 23:27:09 up 2 min,  0 users,  load average: 0.18, 0.14, 0.06
	Linux ha-138000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [c2a16126718b] <==
	I0815 23:23:47.704130       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:23:57.712115       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0815 23:23:57.712139       1 main.go:299] handling current node
	I0815 23:23:57.712152       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0815 23:23:57.712157       1 main.go:322] Node ha-138000-m02 has CIDR [10.244.1.0/24] 
	I0815 23:23:57.712420       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0815 23:23:57.712543       1 main.go:322] Node ha-138000-m03 has CIDR [10.244.2.0/24] 
	I0815 23:23:57.712720       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0815 23:23:57.712823       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:24:07.712424       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0815 23:24:07.712474       1 main.go:299] handling current node
	I0815 23:24:07.712488       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0815 23:24:07.712494       1 main.go:322] Node ha-138000-m02 has CIDR [10.244.1.0/24] 
	I0815 23:24:07.712623       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0815 23:24:07.712704       1 main.go:322] Node ha-138000-m03 has CIDR [10.244.2.0/24] 
	I0815 23:24:07.712814       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0815 23:24:07.712851       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:24:17.705680       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0815 23:24:17.705716       1 main.go:322] Node ha-138000-m02 has CIDR [10.244.1.0/24] 
	I0815 23:24:17.706225       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0815 23:24:17.706282       1 main.go:322] Node ha-138000-m03 has CIDR [10.244.2.0/24] 
	I0815 23:24:17.706514       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0815 23:24:17.706582       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:24:17.706957       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0815 23:24:17.707108       1 main.go:299] handling current node
	
	
	==> kube-apiserver [3067be70dd50] <==
	I0815 23:26:33.075028       1 options.go:228] external host was not specified, using 192.169.0.5
	I0815 23:26:33.076272       1 server.go:142] Version: v1.31.0
	I0815 23:26:33.076309       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:26:33.428918       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0815 23:26:33.438505       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 23:26:33.441482       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0815 23:26:33.441514       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0815 23:26:33.441724       1 instance.go:232] Using reconciler: lease
	W0815 23:26:53.430994       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0815 23:26:53.431041       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0815 23:26:53.442642       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0815 23:26:53.442682       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
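	The fatal exit is the apiserver's storage bootstrap timing out against etcd on 127.0.0.1:2379, consistent with the quorum loss shown above; kubelet then places the container in CrashLoopBackOff (see the kubelet section below). To tail the crash loop directly in the guest, using the container ID from this section header:
	
	  minikube ssh -p ha-138000 -- sudo docker logs --tail 20 3067be70dd50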
	
	
	==> kube-controller-manager [065b34908ec9] <==
	I0815 23:26:23.285232       1 serving.go:386] Generated self-signed cert in-memory
	I0815 23:26:23.782222       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0815 23:26:23.782291       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:26:23.783394       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0815 23:26:23.783594       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0815 23:26:23.783712       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0815 23:26:23.783867       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0815 23:26:54.448628       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.5:8443/healthz\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:34400->192.169.0.5:8443: read: connection reset by peer"
	
	
	==> kube-proxy [fc2e141007ef] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 23:19:33.922056       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 23:19:33.939645       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0815 23:19:33.939881       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 23:19:33.966815       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 23:19:33.966963       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 23:19:33.967061       1 server_linux.go:169] "Using iptables Proxier"
	I0815 23:19:33.969119       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 23:19:33.969437       1 server.go:483] "Version info" version="v1.31.0"
	I0815 23:19:33.969466       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:19:33.970289       1 config.go:197] "Starting service config controller"
	I0815 23:19:33.970403       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 23:19:33.970441       1 config.go:104] "Starting endpoint slice config controller"
	I0815 23:19:33.970446       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 23:19:33.970870       1 config.go:326] "Starting node config controller"
	I0815 23:19:33.970895       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 23:19:34.070930       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 23:19:34.070930       1 shared_informer.go:320] Caches are synced for node config
	I0815 23:19:34.070944       1 shared_informer.go:320] Caches are synced for service config
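	The nftables errors at the top of this section come from kube-proxy's best-effort cleanup of stale nftables rules on a kernel without nft support; it then proceeds normally with the iptables proxier ("Using iptables Proxier"). A quick spot check that the iptables rules were actually installed, assuming the node is up:
	
	  minikube ssh -p ha-138000 -- sudo iptables -t nat -L KUBE-SERVICES | head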
	
	
	==> kube-scheduler [7c25fb975759] <==
	E0815 23:19:26.587225       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0815 23:19:27.147361       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0815 23:22:08.672878       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-t5sdh\": pod busybox-7dff88458-t5sdh is already assigned to node \"ha-138000-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-t5sdh" node="ha-138000-m03"
	E0815 23:22:08.672963       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod b81fe134-5ef5-4074-920a-105e4bd801be(default/busybox-7dff88458-t5sdh) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-t5sdh"
	E0815 23:22:08.672983       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-t5sdh\": pod busybox-7dff88458-t5sdh is already assigned to node \"ha-138000-m03\"" pod="default/busybox-7dff88458-t5sdh"
	I0815 23:22:08.673000       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-t5sdh" node="ha-138000-m03"
	E0815 23:22:08.673278       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-wgww9\": pod busybox-7dff88458-wgww9 is already assigned to node \"ha-138000\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-wgww9" node="ha-138000"
	E0815 23:22:08.673460       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod b8eb799e-e761-4647-8aae-388c38bc936e(default/busybox-7dff88458-wgww9) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-wgww9"
	E0815 23:22:08.673519       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-wgww9\": pod busybox-7dff88458-wgww9 is already assigned to node \"ha-138000\"" pod="default/busybox-7dff88458-wgww9"
	I0815 23:22:08.673609       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-wgww9" node="ha-138000"
	E0815 23:22:36.177995       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-qpth7\": pod kube-proxy-qpth7 is already assigned to node \"ha-138000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-qpth7" node="ha-138000-m04"
	E0815 23:22:36.178149       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a343f80b-0fe9-4c88-9782-5fbf9a6170d1(kube-system/kube-proxy-qpth7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-qpth7"
	E0815 23:22:36.178181       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-qpth7\": pod kube-proxy-qpth7 is already assigned to node \"ha-138000-m04\"" pod="kube-system/kube-proxy-qpth7"
	I0815 23:22:36.178207       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-qpth7" node="ha-138000-m04"
	E0815 23:22:36.181318       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-m887r\": pod kindnet-m887r is already assigned to node \"ha-138000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-m887r" node="ha-138000-m04"
	E0815 23:22:36.181425       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod ba31865b-c712-47a8-9fd8-06420270ac8b(kube-system/kindnet-m887r) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-m887r"
	E0815 23:22:36.181440       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-m887r\": pod kindnet-m887r is already assigned to node \"ha-138000-m04\"" pod="kube-system/kindnet-m887r"
	I0815 23:22:36.181451       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-m887r" node="ha-138000-m04"
	E0815 23:22:36.197728       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-xc8mj\": pod kube-proxy-xc8mj is already assigned to node \"ha-138000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-xc8mj" node="ha-138000-m04"
	E0815 23:22:36.197783       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 661886ed-7ec0-401d-b893-4dd74852e477(kube-system/kube-proxy-xc8mj) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-xc8mj"
	E0815 23:22:36.197797       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-xc8mj\": pod kube-proxy-xc8mj is already assigned to node \"ha-138000-m04\"" pod="kube-system/kube-proxy-xc8mj"
	I0815 23:22:36.197815       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-xc8mj" node="ha-138000-m04"
	I0815 23:24:23.554288       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0815 23:24:23.554620       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0815 23:24:23.554869       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ac6935271595] <==
	E0815 23:26:28.518254       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:26:30.367151       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:26:30.367350       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:26:32.176181       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:26:32.176274       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:26:43.928456       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0815 23:26:43.929071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0815 23:26:46.274827       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0815 23:26:46.275005       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0815 23:26:50.098021       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0815 23:26:50.098077       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0815 23:26:52.088220       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0815 23:26:52.088690       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0815 23:26:52.981733       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0815 23:26:52.981795       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0815 23:26:54.449050       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:59210->192.169.0.5:8443: read: connection reset by peer
	E0815 23:26:54.449275       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:59210->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0815 23:26:54.449924       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:59206->192.169.0.5:8443: read: connection reset by peer
	E0815 23:26:54.450154       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:59206->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0815 23:26:54.450346       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:59224->192.169.0.5:8443: read: connection reset by peer
	E0815 23:26:54.450494       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:59224->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0815 23:26:54.450863       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:59196->192.169.0.5:8443: read: connection reset by peer
	E0815 23:26:54.451005       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:59196->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0815 23:26:54.451798       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:59190->192.169.0.5:8443: read: connection reset by peer
	E0815 23:26:54.451950       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:59190->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	
	
	==> kubelet <==
	Aug 15 23:26:55 ha-138000 kubelet[1576]: I0815 23:26:55.245052    1576 scope.go:117] "RemoveContainer" containerID="065b34908ec9822585d14c60d989cc0d488ea0a61e4d398a5b59f18afaf682bd"
	Aug 15 23:26:55 ha-138000 kubelet[1576]: E0815 23:26:55.245163    1576 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-138000_kube-system(ed196a03081880609aebd781f662c0b9)\"" pod="kube-system/kube-controller-manager-ha-138000" podUID="ed196a03081880609aebd781f662c0b9"
	Aug 15 23:26:55 ha-138000 kubelet[1576]: I0815 23:26:55.486843    1576 scope.go:117] "RemoveContainer" containerID="3067be70dd5080b978b1c43a941f337a0fe62fd963692f6a5910b7e9981d8828"
	Aug 15 23:26:55 ha-138000 kubelet[1576]: E0815 23:26:55.487178    1576 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-138000_kube-system(8df20a622868f60a60f4423e49478fa2)\"" pod="kube-system/kube-apiserver-ha-138000" podUID="8df20a622868f60a60f4423e49478fa2"
	Aug 15 23:26:59 ha-138000 kubelet[1576]: I0815 23:26:59.740806    1576 kubelet_node_status.go:72] "Attempting to register node" node="ha-138000"
	Aug 15 23:26:59 ha-138000 kubelet[1576]: I0815 23:26:59.764458    1576 scope.go:117] "RemoveContainer" containerID="065b34908ec9822585d14c60d989cc0d488ea0a61e4d398a5b59f18afaf682bd"
	Aug 15 23:26:59 ha-138000 kubelet[1576]: E0815 23:26:59.764809    1576 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-138000_kube-system(ed196a03081880609aebd781f662c0b9)\"" pod="kube-system/kube-controller-manager-ha-138000" podUID="ed196a03081880609aebd781f662c0b9"
	Aug 15 23:26:59 ha-138000 kubelet[1576]: E0815 23:26:59.937430    1576 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-138000\" not found"
	Aug 15 23:27:00 ha-138000 kubelet[1576]: I0815 23:27:00.860041    1576 scope.go:117] "RemoveContainer" containerID="065b34908ec9822585d14c60d989cc0d488ea0a61e4d398a5b59f18afaf682bd"
	Aug 15 23:27:00 ha-138000 kubelet[1576]: E0815 23:27:00.860271    1576 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-138000_kube-system(ed196a03081880609aebd781f662c0b9)\"" pod="kube-system/kube-controller-manager-ha-138000" podUID="ed196a03081880609aebd781f662c0b9"
	Aug 15 23:27:01 ha-138000 kubelet[1576]: I0815 23:27:01.392106    1576 scope.go:117] "RemoveContainer" containerID="3067be70dd5080b978b1c43a941f337a0fe62fd963692f6a5910b7e9981d8828"
	Aug 15 23:27:01 ha-138000 kubelet[1576]: E0815 23:27:01.392292    1576 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-138000_kube-system(8df20a622868f60a60f4423e49478fa2)\"" pod="kube-system/kube-apiserver-ha-138000" podUID="8df20a622868f60a60f4423e49478fa2"
	Aug 15 23:27:01 ha-138000 kubelet[1576]: E0815 23:27:01.956391    1576 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-138000"
	Aug 15 23:27:01 ha-138000 kubelet[1576]: W0815 23:27:01.956546    1576 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 15 23:27:01 ha-138000 kubelet[1576]: E0815 23:27:01.956664    1576 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 15 23:27:01 ha-138000 kubelet[1576]: E0815 23:27:01.957086    1576 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-138000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Aug 15 23:27:05 ha-138000 kubelet[1576]: E0815 23:27:05.022839    1576 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-138000.17ec0a7d1e5ef862  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-138000,UID:ha-138000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-138000,},FirstTimestamp:2024-08-15 23:24:49.872787554 +0000 UTC m=+0.202964169,LastTimestamp:2024-08-15 23:24:49.872787554 +0000 UTC m=+0.202964169,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-138000,}"
	Aug 15 23:27:05 ha-138000 kubelet[1576]: E0815 23:27:05.023401    1576 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{ha-138000.17ec0a7d1e5ef862  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-138000,UID:ha-138000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-138000,},FirstTimestamp:2024-08-15 23:24:49.872787554 +0000 UTC m=+0.202964169,LastTimestamp:2024-08-15 23:24:49.872787554 +0000 UTC m=+0.202964169,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-138000,}"
	Aug 15 23:27:08 ha-138000 kubelet[1576]: W0815 23:27:08.092984    1576 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 15 23:27:08 ha-138000 kubelet[1576]: E0815 23:27:08.093097    1576 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-138000.17ec0a7d1fe7353b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-138000,UID:ha-138000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ha-138000 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ha-138000,},FirstTimestamp:2024-08-15 23:24:49.898493243 +0000 UTC m=+0.228669857,LastTimestamp:2024-08-15 23:24:49.898493243 +0000 UTC m=+0.228669857,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-138000,}"
	Aug 15 23:27:08 ha-138000 kubelet[1576]: E0815 23:27:08.093302    1576 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 15 23:27:08 ha-138000 kubelet[1576]: W0815 23:27:08.092984    1576 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-138000&limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 15 23:27:08 ha-138000 kubelet[1576]: E0815 23:27:08.093353    1576 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-138000&limit=500&resourceVersion=0\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 15 23:27:08 ha-138000 kubelet[1576]: I0815 23:27:08.958789    1576 kubelet_node_status.go:72] "Attempting to register node" node="ha-138000"
	Aug 15 23:27:09 ha-138000 kubelet[1576]: E0815 23:27:09.941545    1576 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-138000\" not found"
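	The registration failures target control-plane.minikube.internal:8443, the cluster's control-plane endpoint (192.169.0.254, typically served by kube-vip in minikube HA clusters); "no route to host" is expected while every control plane is down. minikube pins this name in the guest's /etc/hosts, which can be verified with:
	
	  minikube ssh -p ha-138000 -- grep minikube.internal /etc/hosts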
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-138000 -n ha-138000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-138000 -n ha-138000: exit status 2 (151.736746ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-138000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.84s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (163.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 stop -v=7 --alsologtostderr
E0815 16:27:37.837583    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:28:05.544208    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:28:57.653017    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-138000 stop -v=7 --alsologtostderr: (2m43.770020526s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-138000 status -v=7 --alsologtostderr: exit status 7 (101.838992ms)

                                                
                                                
-- stdout --
	ha-138000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-138000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-138000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-138000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 16:29:53.863779    3839 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:29:53.863975    3839 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:29:53.863980    3839 out.go:358] Setting ErrFile to fd 2...
	I0815 16:29:53.863984    3839 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:29:53.864170    3839 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-977/.minikube/bin
	I0815 16:29:53.864360    3839 out.go:352] Setting JSON to false
	I0815 16:29:53.864380    3839 mustload.go:65] Loading cluster: ha-138000
	I0815 16:29:53.864416    3839 notify.go:220] Checking for updates...
	I0815 16:29:53.864688    3839 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:29:53.864701    3839 status.go:255] checking status of ha-138000 ...
	I0815 16:29:53.865067    3839 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:29:53.865119    3839 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:29:53.874207    3839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52211
	I0815 16:29:53.874644    3839 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:29:53.875087    3839 main.go:141] libmachine: Using API Version  1
	I0815 16:29:53.875105    3839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:29:53.875361    3839 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:29:53.875481    3839 main.go:141] libmachine: (ha-138000) Calling .GetState
	I0815 16:29:53.875591    3839 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:29:53.875662    3839 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid from json: 3662
	I0815 16:29:53.876551    3839 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid 3662 missing from process table
	I0815 16:29:53.876601    3839 status.go:330] ha-138000 host status = "Stopped" (err=<nil>)
	I0815 16:29:53.876608    3839 status.go:343] host is not running, skipping remaining checks
	I0815 16:29:53.876615    3839 status.go:257] ha-138000 status: &{Name:ha-138000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 16:29:53.876638    3839 status.go:255] checking status of ha-138000-m02 ...
	I0815 16:29:53.876888    3839 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:29:53.876909    3839 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:29:53.885491    3839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52213
	I0815 16:29:53.885839    3839 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:29:53.886174    3839 main.go:141] libmachine: Using API Version  1
	I0815 16:29:53.886188    3839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:29:53.886394    3839 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:29:53.886514    3839 main.go:141] libmachine: (ha-138000-m02) Calling .GetState
	I0815 16:29:53.886596    3839 main.go:141] libmachine: (ha-138000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:29:53.886667    3839 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid from json: 3670
	I0815 16:29:53.887588    3839 status.go:330] ha-138000-m02 host status = "Stopped" (err=<nil>)
	I0815 16:29:53.887587    3839 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid 3670 missing from process table
	I0815 16:29:53.887597    3839 status.go:343] host is not running, skipping remaining checks
	I0815 16:29:53.887603    3839 status.go:257] ha-138000-m02 status: &{Name:ha-138000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 16:29:53.887613    3839 status.go:255] checking status of ha-138000-m03 ...
	I0815 16:29:53.887872    3839 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:29:53.887895    3839 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:29:53.896545    3839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52215
	I0815 16:29:53.896900    3839 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:29:53.897228    3839 main.go:141] libmachine: Using API Version  1
	I0815 16:29:53.897238    3839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:29:53.897451    3839 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:29:53.897564    3839 main.go:141] libmachine: (ha-138000-m03) Calling .GetState
	I0815 16:29:53.897648    3839 main.go:141] libmachine: (ha-138000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:29:53.897728    3839 main.go:141] libmachine: (ha-138000-m03) DBG | hyperkit pid from json: 3119
	I0815 16:29:53.898622    3839 main.go:141] libmachine: (ha-138000-m03) DBG | hyperkit pid 3119 missing from process table
	I0815 16:29:53.898643    3839 status.go:330] ha-138000-m03 host status = "Stopped" (err=<nil>)
	I0815 16:29:53.898650    3839 status.go:343] host is not running, skipping remaining checks
	I0815 16:29:53.898656    3839 status.go:257] ha-138000-m03 status: &{Name:ha-138000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 16:29:53.898675    3839 status.go:255] checking status of ha-138000-m04 ...
	I0815 16:29:53.898954    3839 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:29:53.898976    3839 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:29:53.907444    3839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52217
	I0815 16:29:53.907793    3839 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:29:53.908152    3839 main.go:141] libmachine: Using API Version  1
	I0815 16:29:53.908170    3839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:29:53.908372    3839 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:29:53.908499    3839 main.go:141] libmachine: (ha-138000-m04) Calling .GetState
	I0815 16:29:53.908597    3839 main.go:141] libmachine: (ha-138000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:29:53.908674    3839 main.go:141] libmachine: (ha-138000-m04) DBG | hyperkit pid from json: 3240
	I0815 16:29:53.909575    3839 main.go:141] libmachine: (ha-138000-m04) DBG | hyperkit pid 3240 missing from process table
	I0815 16:29:53.909597    3839 status.go:330] ha-138000-m04 host status = "Stopped" (err=<nil>)
	I0815 16:29:53.909604    3839 status.go:343] host is not running, skipping remaining checks
	I0815 16:29:53.909612    3839 status.go:257] ha-138000-m04 status: &{Name:ha-138000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-amd64 -p ha-138000 status -v=7 --alsologtostderr": ha-138000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-138000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-138000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-138000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-amd64 -p ha-138000 status -v=7 --alsologtostderr": ha-138000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-138000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-138000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-138000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-amd64 -p ha-138000 status -v=7 --alsologtostderr": ha-138000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-138000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-138000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-138000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-138000 -n ha-138000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-138000 -n ha-138000: exit status 7 (68.267453ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-138000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (163.94s)
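
Editor's note: all three assertions (ha_test.go:543/549/552) fail for the same underlying reason: the earlier "node delete m03" never completed (its entry in the audit table below has no End Time), so status still reports three control-plane nodes and four kubelets where the test expects two and three. The counts the test takes can be reproduced by hand from any status dump (a sketch against the output quoted above):

	out=$(out/minikube-darwin-amd64 -p ha-138000 status -v=7 --alsologtostderr 2>/dev/null)
	echo "$out" | grep -c 'type: Control Plane'   # 3 in the dump above; 2 expected after the m03 delete
	echo "$out" | grep -c 'kubelet: Stopped'      # 4 in the dump above; 3 expected
	echo "$out" | grep -c 'apiserver: Stopped'    # 3 in the dump above; 2 expected
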

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (177.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-138000 --wait=true -v=7 --alsologtostderr --driver=hyperkit 
E0815 16:30:20.719256    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:32:37.858557    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-darwin-amd64 start -p ha-138000 --wait=true -v=7 --alsologtostderr --driver=hyperkit : (2m52.921453243s)
ha_test.go:566: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 status -v=7 --alsologtostderr
ha_test.go:571: status says not two control-plane nodes are present: args "out/minikube-darwin-amd64 -p ha-138000 status -v=7 --alsologtostderr": ha-138000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-138000-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-138000-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-138000-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:574: status says not three hosts are running: args "out/minikube-darwin-amd64 -p ha-138000 status -v=7 --alsologtostderr": ha-138000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-138000-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-138000-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-138000-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:577: status says not three kubelets are running: args "out/minikube-darwin-amd64 -p ha-138000 status -v=7 --alsologtostderr": ha-138000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-138000-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-138000-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-138000-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:580: status says not two apiservers are running: args "out/minikube-darwin-amd64 -p ha-138000 status -v=7 --alsologtostderr": ha-138000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-138000-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-138000-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-138000-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:597: expected 3 nodes Ready status to be True, got 
-- stdout --
	' True
	 True
	 True
	 True
	'

                                                
                                                
-- /stdout --
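
Editor's note: the Ready probe at ha_test.go:592 renders one " True"/" False" line per node by walking each node's status.conditions and printing the condition of type Ready; four lines came back instead of the expected three because the failed m03 delete left the cluster with four nodes. The test wraps the template in an extra pair of literal single quotes, which is why the captured stdout begins and ends with a stray quote. A directly runnable shell form of the same template:

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
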
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-138000 -n ha-138000
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-138000 logs -n 25: (3.469447794s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-138000 cp ha-138000-m03:/home/docker/cp-test.txt                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04:/home/docker/cp-test_ha-138000-m03_ha-138000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n ha-138000-m04 sudo cat                                                                                      | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /home/docker/cp-test_ha-138000-m03_ha-138000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-138000 cp testdata/cp-test.txt                                                                                            | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-138000 cp ha-138000-m04:/home/docker/cp-test.txt                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1119105544/001/cp-test_ha-138000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-138000 cp ha-138000-m04:/home/docker/cp-test.txt                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000:/home/docker/cp-test_ha-138000-m04_ha-138000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n ha-138000 sudo cat                                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /home/docker/cp-test_ha-138000-m04_ha-138000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-138000 cp ha-138000-m04:/home/docker/cp-test.txt                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m02:/home/docker/cp-test_ha-138000-m04_ha-138000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n ha-138000-m02 sudo cat                                                                                      | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /home/docker/cp-test_ha-138000-m04_ha-138000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-138000 cp ha-138000-m04:/home/docker/cp-test.txt                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m03:/home/docker/cp-test_ha-138000-m04_ha-138000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n ha-138000-m03 sudo cat                                                                                      | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /home/docker/cp-test_ha-138000-m04_ha-138000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-138000 node stop m02 -v=7                                                                                                 | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-138000 node start m02 -v=7                                                                                                | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:24 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-138000 -v=7                                                                                                       | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:24 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-138000 -v=7                                                                                                            | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:24 PDT | 15 Aug 24 16:24 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-138000 --wait=true -v=7                                                                                                | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:24 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-138000                                                                                                            | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:26 PDT |                     |
	| node    | ha-138000 node delete m03 -v=7                                                                                               | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:26 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-138000 stop -v=7                                                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:27 PDT | 15 Aug 24 16:29 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-138000 --wait=true                                                                                                     | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:29 PDT | 15 Aug 24 16:32 PDT |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
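
Editor's note: the cp/ssh pairs in the audit table are two-way file-transfer checks: each cp into or out of a node is followed by an ssh cat of the destination file to verify the copy. One pair, reconstructed from the table rows (same binary and profile as logged):

	out/minikube-darwin-amd64 -p ha-138000 cp testdata/cp-test.txt ha-138000-m04:/home/docker/cp-test.txt
	out/minikube-darwin-amd64 -p ha-138000 ssh -n ha-138000-m04 "sudo cat /home/docker/cp-test.txt"
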
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 16:29:54
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 16:29:54.033682    3848 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:29:54.033848    3848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:29:54.033854    3848 out.go:358] Setting ErrFile to fd 2...
	I0815 16:29:54.033858    3848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:29:54.034027    3848 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-977/.minikube/bin
	I0815 16:29:54.035457    3848 out.go:352] Setting JSON to false
	I0815 16:29:54.058003    3848 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1765,"bootTime":1723762829,"procs":428,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0815 16:29:54.058095    3848 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 16:29:54.080014    3848 out.go:177] * [ha-138000] minikube v1.33.1 on Darwin 14.6.1
	I0815 16:29:54.122634    3848 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 16:29:54.122696    3848 notify.go:220] Checking for updates...
	I0815 16:29:54.164406    3848 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:29:54.185700    3848 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0815 16:29:54.206554    3848 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 16:29:54.227614    3848 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-977/.minikube
	I0815 16:29:54.248519    3848 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 16:29:54.270441    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:29:54.271125    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:29:54.271225    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:29:54.280836    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52223
	I0815 16:29:54.281188    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:29:54.281595    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:29:54.281610    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:29:54.281823    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:29:54.281934    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:29:54.282121    3848 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 16:29:54.282360    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:29:54.282379    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:29:54.290749    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52225
	I0815 16:29:54.291068    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:29:54.291384    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:29:54.291393    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:29:54.291633    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:29:54.291762    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:29:54.320542    3848 out.go:177] * Using the hyperkit driver based on existing profile
	I0815 16:29:54.362577    3848 start.go:297] selected driver: hyperkit
	I0815 16:29:54.362603    3848 start.go:901] validating driver "hyperkit" against &{Name:ha-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:29:54.362832    3848 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 16:29:54.363029    3848 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:29:54.363230    3848 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19452-977/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0815 16:29:54.372833    3848 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0815 16:29:54.376641    3848 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:29:54.376661    3848 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0815 16:29:54.379303    3848 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 16:29:54.379340    3848 cni.go:84] Creating CNI manager for ""
	I0815 16:29:54.379348    3848 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0815 16:29:54.379445    3848 start.go:340] cluster config:
	{Name:ha-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
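
Editor's note: the cluster config above records the HA topology under test: three control-plane nodes (192.169.0.5/.6/.7), one worker (.8), and the shared API-server VIP 192.169.0.254 on port 8443. The persisted copy of this struct can be inspected directly (path taken from the profile save a few lines below):

	cat /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json
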
	I0815 16:29:54.379558    3848 iso.go:125] acquiring lock: {Name:mkb93fc66b60be15dffa9eb2999a4be8d8d4d218 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:29:54.421457    3848 out.go:177] * Starting "ha-138000" primary control-plane node in "ha-138000" cluster
	I0815 16:29:54.442393    3848 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:29:54.442490    3848 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0815 16:29:54.442517    3848 cache.go:56] Caching tarball of preloaded images
	I0815 16:29:54.442747    3848 preload.go:172] Found /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0815 16:29:54.442766    3848 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:29:54.442942    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:29:54.443891    3848 start.go:360] acquireMachinesLock for ha-138000: {Name:mkac8034c8a60617271f2754a03dcaf74fc4fef7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:29:54.444072    3848 start.go:364] duration metric: took 141.088µs to acquireMachinesLock for "ha-138000"
	I0815 16:29:54.444120    3848 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:29:54.444137    3848 fix.go:54] fixHost starting: 
	I0815 16:29:54.444553    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:29:54.444588    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:29:54.453701    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52227
	I0815 16:29:54.454060    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:29:54.454408    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:29:54.454428    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:29:54.454668    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:29:54.454795    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:29:54.454900    3848 main.go:141] libmachine: (ha-138000) Calling .GetState
	I0815 16:29:54.455015    3848 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:29:54.455069    3848 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid from json: 3662
	I0815 16:29:54.455998    3848 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid 3662 missing from process table
	I0815 16:29:54.456024    3848 fix.go:112] recreateIfNeeded on ha-138000: state=Stopped err=<nil>
	I0815 16:29:54.456037    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	W0815 16:29:54.456128    3848 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:29:54.477408    3848 out.go:177] * Restarting existing hyperkit VM for "ha-138000" ...
	I0815 16:29:54.498281    3848 main.go:141] libmachine: (ha-138000) Calling .Start
	I0815 16:29:54.498449    3848 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:29:54.498522    3848 main.go:141] libmachine: (ha-138000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/hyperkit.pid
	I0815 16:29:54.498549    3848 main.go:141] libmachine: (ha-138000) DBG | Using UUID bf1b12d0-37a9-4c04-a028-0dd0a6dcd337
	I0815 16:29:54.612230    3848 main.go:141] libmachine: (ha-138000) DBG | Generated MAC 66:4d:cd:54:35:15
	I0815 16:29:54.612256    3848 main.go:141] libmachine: (ha-138000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000
	I0815 16:29:54.612403    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bf1b12d0-37a9-4c04-a028-0dd0a6dcd337", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002a9530)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:29:54.612447    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bf1b12d0-37a9-4c04-a028-0dd0a6dcd337", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002a9530)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:29:54.612479    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "bf1b12d0-37a9-4c04-a028-0dd0a6dcd337", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/ha-138000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"}
	I0815 16:29:54.612534    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U bf1b12d0-37a9-4c04-a028-0dd0a6dcd337 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/ha-138000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/console-ring -f kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"
	I0815 16:29:54.612554    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0815 16:29:54.613954    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 DEBUG: hyperkit: Pid is 3862
	I0815 16:29:54.614352    3848 main.go:141] libmachine: (ha-138000) DBG | Attempt 0
	I0815 16:29:54.614367    3848 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:29:54.614458    3848 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid from json: 3862
	I0815 16:29:54.615668    3848 main.go:141] libmachine: (ha-138000) DBG | Searching for 66:4d:cd:54:35:15 in /var/db/dhcpd_leases ...
	I0815 16:29:54.615762    3848 main.go:141] libmachine: (ha-138000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0815 16:29:54.615788    3848 main.go:141] libmachine: (ha-138000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66be8f71}
	I0815 16:29:54.615808    3848 main.go:141] libmachine: (ha-138000) DBG | Found match: 66:4d:cd:54:35:15
	I0815 16:29:54.615836    3848 main.go:141] libmachine: (ha-138000) DBG | IP: 192.169.0.5
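
Editor's note: IP discovery on the hyperkit driver works by matching the VM's generated MAC address against macOS's vmnet DHCP lease file, as the search above shows. The same lookup can be done by hand (MAC and path from the log; each lease is one {Name:... IPAddress:... HWAddress:...} record):

	grep -i '66:4d:cd:54:35:15' /var/db/dhcpd_leases
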
	I0815 16:29:54.615932    3848 main.go:141] libmachine: (ha-138000) Calling .GetConfigRaw
	I0815 16:29:54.616670    3848 main.go:141] libmachine: (ha-138000) Calling .GetIP
	I0815 16:29:54.616859    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:29:54.617254    3848 machine.go:93] provisionDockerMachine start ...
	I0815 16:29:54.617264    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:29:54.617414    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:29:54.617528    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:29:54.617607    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:29:54.617679    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:29:54.617801    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:29:54.617967    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:29:54.618192    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:29:54.618201    3848 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 16:29:54.621800    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0815 16:29:54.673574    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0815 16:29:54.674258    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:29:54.674277    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:29:54.674284    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:29:54.674293    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:29:55.057707    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:55 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0815 16:29:55.057723    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:55 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0815 16:29:55.172245    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:29:55.172277    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:29:55.172313    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:29:55.172333    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:29:55.173142    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:55 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0815 16:29:55.173153    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:55 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0815 16:30:00.749814    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:30:00 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0815 16:30:00.749867    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:30:00 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0815 16:30:00.749877    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:30:00 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0815 16:30:00.774690    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:30:00 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0815 16:30:05.697072    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 16:30:05.697084    3848 main.go:141] libmachine: (ha-138000) Calling .GetMachineName
	I0815 16:30:05.697230    3848 buildroot.go:166] provisioning hostname "ha-138000"
	I0815 16:30:05.697241    3848 main.go:141] libmachine: (ha-138000) Calling .GetMachineName
	I0815 16:30:05.697340    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:05.697431    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:05.697531    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:05.697615    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:05.697729    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:05.697864    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:05.698023    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:30:05.698032    3848 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-138000 && echo "ha-138000" | sudo tee /etc/hostname
	I0815 16:30:05.773271    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-138000
	
	I0815 16:30:05.773290    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:05.773430    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:05.773543    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:05.773660    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:05.773777    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:05.773935    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:05.774084    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:30:05.774095    3848 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-138000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-138000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-138000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 16:30:05.843913    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
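
Editor's note: hostname provisioning, per the quoted script, is two-step: set the live hostname over SSH, then make it resolvable by rewriting (or appending) a 127.0.1.1 entry in /etc/hosts. The result can be spot-checked through the same SSH path (a sketch using the minikube ssh wrapper):

	out/minikube-darwin-amd64 -p ha-138000 ssh "hostname && grep ha-138000 /etc/hosts"
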
	I0815 16:30:05.843933    3848 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19452-977/.minikube CaCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19452-977/.minikube}
	I0815 16:30:05.843947    3848 buildroot.go:174] setting up certificates
	I0815 16:30:05.843955    3848 provision.go:84] configureAuth start
	I0815 16:30:05.843962    3848 main.go:141] libmachine: (ha-138000) Calling .GetMachineName
	I0815 16:30:05.844101    3848 main.go:141] libmachine: (ha-138000) Calling .GetIP
	I0815 16:30:05.844215    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:05.844315    3848 provision.go:143] copyHostCerts
	I0815 16:30:05.844350    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:30:05.844436    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem, removing ...
	I0815 16:30:05.844445    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:30:05.844633    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem (1082 bytes)
	I0815 16:30:05.844853    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:30:05.844900    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem, removing ...
	I0815 16:30:05.844906    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:30:05.844989    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem (1123 bytes)
	I0815 16:30:05.845165    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:30:05.845202    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem, removing ...
	I0815 16:30:05.845207    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:30:05.845283    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem (1679 bytes)
	I0815 16:30:05.845432    3848 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem org=jenkins.ha-138000 san=[127.0.0.1 192.169.0.5 ha-138000 localhost minikube]
	I0815 16:30:06.272971    3848 provision.go:177] copyRemoteCerts
	I0815 16:30:06.273031    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 16:30:06.273048    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:06.273185    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:06.273289    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:06.273389    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:06.273476    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:30:06.313671    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 16:30:06.313804    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 16:30:06.335207    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 16:30:06.335264    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0815 16:30:06.355028    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 16:30:06.355085    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 16:30:06.374691    3848 provision.go:87] duration metric: took 530.722569ms to configureAuth
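
Editor's note: configureAuth stages the TLS material that dockerd will be told to require (the --tlsverify/--tlscacert/--tlscert/--tlskey flags appear in the docker.service unit written below): the shared CA plus a server cert freshly generated with SANs for 127.0.0.1, 192.169.0.5, ha-138000, localhost and minikube. Per the scp lines above, the placement on the VM can be verified with:

	# ca.pem         -> /etc/docker/ca.pem         (1082 bytes)
	# server.pem     -> /etc/docker/server.pem     (1196 bytes)
	# server-key.pem -> /etc/docker/server-key.pem (1679 bytes)
	out/minikube-darwin-amd64 -p ha-138000 ssh "ls -l /etc/docker"
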
	I0815 16:30:06.374705    3848 buildroot.go:189] setting minikube options for container-runtime
	I0815 16:30:06.374882    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:30:06.374898    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:30:06.375031    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:06.375135    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:06.375215    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:06.375302    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:06.375381    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:06.375501    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:06.375633    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:30:06.375641    3848 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0815 16:30:06.439797    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0815 16:30:06.439813    3848 buildroot.go:70] root file system type: tmpfs
	I0815 16:30:06.439885    3848 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0815 16:30:06.439896    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:06.440029    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:06.440119    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:06.440211    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:06.440322    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:06.440461    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:06.440594    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:30:06.440647    3848 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0815 16:30:06.516125    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0815 16:30:06.516150    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:06.516294    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:06.516408    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:06.516493    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:06.516594    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:06.516721    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:06.516850    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:30:06.516863    3848 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0815 16:30:08.163546    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0815 16:30:08.163562    3848 machine.go:96] duration metric: took 13.546346493s to provisionDockerMachine
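
The docker.service update above follows a compare-then-swap idiom: the rendered unit is written to docker.service.new, diffed against the installed unit, and moved into place (followed by daemon-reload, enable, and restart) only when the two differ. In this run diff could not even stat the old unit, so the swap ran and systemd created the enablement symlink. A minimal Go sketch of the same idea, with the service restart reduced to a daemon-reload; the paths mirror the log but the helper itself is hypothetical.

    package main

    import (
        "bytes"
        "os"
        "os/exec"
    )

    // swapIfChanged installs next over current only when their contents differ,
    // then reloads systemd. A missing current file reads as empty and forces a swap.
    func swapIfChanged(current, next string) error {
        oldBytes, _ := os.ReadFile(current)
        newBytes, err := os.ReadFile(next)
        if err != nil {
            return err
        }
        if bytes.Equal(oldBytes, newBytes) {
            return os.Remove(next) // already up to date
        }
        if err := os.Rename(next, current); err != nil {
            return err
        }
        return exec.Command("systemctl", "daemon-reload").Run()
    }

    func main() {
        if err := swapIfChanged("/lib/systemd/system/docker.service",
            "/lib/systemd/system/docker.service.new"); err != nil {
            panic(err)
        }
    }
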
	I0815 16:30:08.163573    3848 start.go:293] postStartSetup for "ha-138000" (driver="hyperkit")
	I0815 16:30:08.163581    3848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 16:30:08.163591    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:30:08.163828    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 16:30:08.163844    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:08.163938    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:08.164036    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:08.164139    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:08.164243    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:30:08.204020    3848 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 16:30:08.207179    3848 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 16:30:08.207192    3848 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/addons for local assets ...
	I0815 16:30:08.207302    3848 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/files for local assets ...
	I0815 16:30:08.207487    3848 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> 14982.pem in /etc/ssl/certs
	I0815 16:30:08.207494    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /etc/ssl/certs/14982.pem
	I0815 16:30:08.207699    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 16:30:08.215716    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:30:08.234526    3848 start.go:296] duration metric: took 70.944461ms for postStartSetup
	I0815 16:30:08.234554    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:30:08.234725    3848 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0815 16:30:08.234737    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:08.234828    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:08.234919    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:08.235004    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:08.235082    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:30:08.273169    3848 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0815 16:30:08.273225    3848 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0815 16:30:08.324608    3848 fix.go:56] duration metric: took 13.880521363s for fixHost
	I0815 16:30:08.324634    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:08.324763    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:08.324864    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:08.324958    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:08.325046    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:08.325174    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:08.325312    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:30:08.325319    3848 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 16:30:08.390142    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723764608.424079213
	
	I0815 16:30:08.390153    3848 fix.go:216] guest clock: 1723764608.424079213
	I0815 16:30:08.390158    3848 fix.go:229] Guest: 2024-08-15 16:30:08.424079213 -0700 PDT Remote: 2024-08-15 16:30:08.324621 -0700 PDT m=+14.326357489 (delta=99.458213ms)
	I0815 16:30:08.390181    3848 fix.go:200] guest clock delta is within tolerance: 99.458213ms
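
The guest-clock check above is a plain subtraction: the VM reported date +%s.%N as 1723764608.424079213 while the host clock read 1723764608.324621 at the same instant, so the guest runs 99.458213 ms ahead, inside the skew tolerance, and no resync is forced. The arithmetic, reproduced with the logged values:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        guest := time.Unix(1723764608, 424079213) // `date +%s.%N` on the VM
        host := time.Unix(1723764608, 324621000)  // host wall clock at the same moment
        fmt.Println(guest.Sub(host))              // 99.458213ms, matching the logged delta
    }
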
	I0815 16:30:08.390185    3848 start.go:83] releasing machines lock for "ha-138000", held for 13.946148575s
	I0815 16:30:08.390205    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:30:08.390341    3848 main.go:141] libmachine: (ha-138000) Calling .GetIP
	I0815 16:30:08.390446    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:30:08.390809    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:30:08.390921    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:30:08.390989    3848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 16:30:08.391019    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:08.391075    3848 ssh_runner.go:195] Run: cat /version.json
	I0815 16:30:08.391087    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:08.391112    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:08.391203    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:08.391220    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:08.391315    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:08.391333    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:08.391411    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:08.391426    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:30:08.391513    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:30:08.423504    3848 ssh_runner.go:195] Run: systemctl --version
	I0815 16:30:08.428371    3848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 16:30:08.479207    3848 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 16:30:08.479307    3848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 16:30:08.492318    3848 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 16:30:08.492331    3848 start.go:495] detecting cgroup driver to use...
	I0815 16:30:08.492428    3848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:30:08.510522    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0815 16:30:08.519382    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0815 16:30:08.528348    3848 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0815 16:30:08.528399    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0815 16:30:08.537505    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:30:08.546478    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0815 16:30:08.555462    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:30:08.564389    3848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 16:30:08.573622    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0815 16:30:08.582698    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0815 16:30:08.591735    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0815 16:30:08.600760    3848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 16:30:08.609049    3848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 16:30:08.617235    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:08.722765    3848 ssh_runner.go:195] Run: sudo systemctl restart containerd
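
The run of sed commands above rewrites /etc/containerd/config.toml in place so that containerd uses the cgroupfs driver and the io.containerd.runc.v2 shim and looks for CNI config in /etc/cni/net.d, after which the daemon is reloaded and restarted. A sketch of one such edit expressed as a Go regexp, over an assumed config fragment:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        cfg := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true`
        // Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        fmt.Println(re.ReplaceAllString(cfg, "${1}SystemdCgroup = false"))
    }
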
	I0815 16:30:08.746033    3848 start.go:495] detecting cgroup driver to use...
	I0815 16:30:08.746116    3848 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0815 16:30:08.759830    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:30:08.771599    3848 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 16:30:08.789529    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:30:08.802787    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:30:08.815377    3848 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0815 16:30:08.844257    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:30:08.860249    3848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:30:08.875283    3848 ssh_runner.go:195] Run: which cri-dockerd
	I0815 16:30:08.878327    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0815 16:30:08.886411    3848 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0815 16:30:08.899899    3848 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0815 16:30:09.005084    3848 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0815 16:30:09.128876    3848 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0815 16:30:09.128948    3848 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0815 16:30:09.143602    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:09.247986    3848 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0815 16:30:11.515907    3848 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.267909782s)
	I0815 16:30:11.515971    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0815 16:30:11.526125    3848 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0815 16:30:11.539600    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:30:11.550726    3848 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0815 16:30:11.659005    3848 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0815 16:30:11.764312    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:11.871322    3848 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0815 16:30:11.884643    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:30:11.896838    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:12.002912    3848 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0815 16:30:12.062997    3848 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0815 16:30:12.063089    3848 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0815 16:30:12.067549    3848 start.go:563] Will wait 60s for crictl version
	I0815 16:30:12.067596    3848 ssh_runner.go:195] Run: which crictl
	I0815 16:30:12.070446    3848 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 16:30:12.096434    3848 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0815 16:30:12.096513    3848 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:30:12.116037    3848 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:30:12.178340    3848 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0815 16:30:12.178421    3848 main.go:141] libmachine: (ha-138000) Calling .GetIP
	I0815 16:30:12.178824    3848 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0815 16:30:12.183375    3848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
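
The /etc/hosts rewrite above is idempotent: any previous host.minikube.internal line is stripped, the current mapping is appended, and the temp file is copied back over /etc/hosts. The same shape in Go, writing to a .new file instead of /tmp/h.$$; the helper is hypothetical and the entry is taken from the log.

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
        kept := lines[:0]
        for _, line := range lines {
            // Mirrors: grep -v $'\thost.minikube.internal$'
            if !strings.HasSuffix(line, "\thost.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, "192.169.0.1\thost.minikube.internal")
        if err := os.WriteFile("/etc/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            panic(err)
        }
    }
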
	I0815 16:30:12.193025    3848 kubeadm.go:883] updating cluster {Name:ha-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 16:30:12.193108    3848 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:30:12.193158    3848 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0815 16:30:12.206441    3848 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0815 16:30:12.206452    3848 docker.go:615] Images already preloaded, skipping extraction
	I0815 16:30:12.206519    3848 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0815 16:30:12.219546    3848 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0815 16:30:12.219565    3848 cache_images.go:84] Images are preloaded, skipping loading
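
The two docker images listings above contain the same tags in a different order (kube-controller-manager moves), so the preload is judged complete and extraction is skipped. A sketch of that order-insensitive comparison; the function name and the two-image sample are illustrative only.

    package main

    import (
        "fmt"
        "sort"
    )

    // sameImages reports whether two image lists hold the same tags, ignoring order.
    func sameImages(a, b []string) bool {
        if len(a) != len(b) {
            return false
        }
        sort.Strings(a)
        sort.Strings(b)
        for i := range a {
            if a[i] != b[i] {
                return false
            }
        }
        return true
    }

    func main() {
        got := []string{"registry.k8s.io/kube-apiserver:v1.31.0", "registry.k8s.io/pause:3.10"}
        want := []string{"registry.k8s.io/pause:3.10", "registry.k8s.io/kube-apiserver:v1.31.0"}
        fmt.Println(sameImages(got, want)) // true
    }
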
	I0815 16:30:12.219576    3848 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0815 16:30:12.219652    3848 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-138000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 16:30:12.219721    3848 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0815 16:30:12.258519    3848 cni.go:84] Creating CNI manager for ""
	I0815 16:30:12.258529    3848 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0815 16:30:12.258542    3848 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 16:30:12.258557    3848 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-138000 NodeName:ha-138000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 16:30:12.258636    3848 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-138000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
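	
	The kubeadm config just printed is four YAML documents in a single file, separated by "---": an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration, and a KubeProxyConfiguration; it is written to the VM as kubeadm.yaml.new a few lines below. A toy illustration of splitting such a stream, assuming the documents are separated exactly as shown:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        raw := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\n" +
            "kind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration"
        for i, doc := range strings.Split(raw, "\n---\n") {
            for _, line := range strings.Split(doc, "\n") {
                if strings.HasPrefix(line, "kind:") {
                    fmt.Printf("document %d: %s\n", i+1, strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
                }
            }
        }
    }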
	
	I0815 16:30:12.258649    3848 kube-vip.go:115] generating kube-vip config ...
	I0815 16:30:12.258696    3848 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 16:30:12.271337    3848 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 16:30:12.271407    3848 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
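
This static pod is what provides the HA endpoint used throughout the test: kube-vip leader-elects on the plndr-cp-lock lease, answers ARP for the VIP 192.169.0.254, and load-balances port 8443 across control-plane nodes. A hypothetical smoke test for that endpoint; the address and port come from the config above, the timeout is arbitrary.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "192.169.0.254:8443", 5*time.Second)
        if err != nil {
            fmt.Println("VIP not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("VIP is accepting connections")
    }
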
	I0815 16:30:12.271468    3848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 16:30:12.279197    3848 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 16:30:12.279243    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0815 16:30:12.286309    3848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0815 16:30:12.299687    3848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 16:30:12.313389    3848 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0815 16:30:12.327846    3848 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0815 16:30:12.341535    3848 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0815 16:30:12.344364    3848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 16:30:12.353627    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:12.452370    3848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:30:12.466830    3848 certs.go:68] Setting up /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000 for IP: 192.169.0.5
	I0815 16:30:12.466842    3848 certs.go:194] generating shared ca certs ...
	I0815 16:30:12.466852    3848 certs.go:226] acquiring lock for ca certs: {Name:mka1c88019f6064d2983ac988db71a67aeb65696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:30:12.467038    3848 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key
	I0815 16:30:12.467111    3848 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key
	I0815 16:30:12.467121    3848 certs.go:256] generating profile certs ...
	I0815 16:30:12.467229    3848 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.key
	I0815 16:30:12.467304    3848 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key.7af4c91a
	I0815 16:30:12.467369    3848 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key
	I0815 16:30:12.467377    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 16:30:12.467397    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 16:30:12.467414    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 16:30:12.467432    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 16:30:12.467450    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 16:30:12.467479    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 16:30:12.467508    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 16:30:12.467527    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 16:30:12.467627    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem (1338 bytes)
	W0815 16:30:12.467674    3848 certs.go:480] ignoring /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498_empty.pem, impossibly tiny 0 bytes
	I0815 16:30:12.467683    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 16:30:12.467721    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem (1082 bytes)
	I0815 16:30:12.467762    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem (1123 bytes)
	I0815 16:30:12.467793    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem (1679 bytes)
	I0815 16:30:12.467866    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:30:12.467898    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:30:12.467918    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem -> /usr/share/ca-certificates/1498.pem
	I0815 16:30:12.467935    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /usr/share/ca-certificates/14982.pem
	I0815 16:30:12.468350    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 16:30:12.503573    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 16:30:12.529609    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 16:30:12.555283    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 16:30:12.583638    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 16:30:12.612822    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 16:30:12.658082    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 16:30:12.709731    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 16:30:12.747480    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 16:30:12.797444    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem --> /usr/share/ca-certificates/1498.pem (1338 bytes)
	I0815 16:30:12.830947    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /usr/share/ca-certificates/14982.pem (1708 bytes)
	I0815 16:30:12.850811    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 16:30:12.864245    3848 ssh_runner.go:195] Run: openssl version
	I0815 16:30:12.868404    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 16:30:12.876802    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:30:12.880151    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:05 /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:30:12.880186    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:30:12.884283    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 16:30:12.892538    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1498.pem && ln -fs /usr/share/ca-certificates/1498.pem /etc/ssl/certs/1498.pem"
	I0815 16:30:12.900652    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1498.pem
	I0815 16:30:12.904017    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:14 /usr/share/ca-certificates/1498.pem
	I0815 16:30:12.904050    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1498.pem
	I0815 16:30:12.908285    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1498.pem /etc/ssl/certs/51391683.0"
	I0815 16:30:12.916567    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14982.pem && ln -fs /usr/share/ca-certificates/14982.pem /etc/ssl/certs/14982.pem"
	I0815 16:30:12.924847    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14982.pem
	I0815 16:30:12.928159    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:14 /usr/share/ca-certificates/14982.pem
	I0815 16:30:12.928193    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14982.pem
	I0815 16:30:12.932352    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14982.pem /etc/ssl/certs/3ec20f2e.0"
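
The link names created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention: the value printed by openssl x509 -hash -noout names a <hash>.0 symlink in /etc/ssl/certs so that OpenSSL-based clients can find a CA by subject. A sketch that derives the link name by shelling out to the same openssl invocation the log uses; it only prints the mapping, since creating the symlink needs root.

    package main

    import (
        "fmt"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. b5213941
        fmt.Println(filepath.Join("/etc/ssl/certs", hash+".0"), "->", pemPath)
    }
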
	I0815 16:30:12.940679    3848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 16:30:12.943953    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 16:30:12.948281    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 16:30:12.952498    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 16:30:12.956859    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 16:30:12.961066    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 16:30:12.965237    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
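
Each -checkend 86400 call above asks a single question: will this certificate still be valid 86400 seconds (24 hours) from now? openssl exits nonzero when the answer is no, which is what would trigger regeneration. The equivalent test in Go's crypto/x509, against one of the paths from the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Same window as -checkend 86400.
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate expires within 24h; regenerate")
        } else {
            fmt.Println("certificate is valid for at least another day")
        }
    }
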
	I0815 16:30:12.969424    3848 kubeadm.go:392] StartCluster: {Name:ha-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:30:12.969537    3848 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0815 16:30:12.983217    3848 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 16:30:12.990985    3848 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 16:30:12.990998    3848 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 16:30:12.991037    3848 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 16:30:12.998611    3848 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 16:30:12.998906    3848 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-138000" does not appear in /Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:30:12.998990    3848 kubeconfig.go:62] /Users/jenkins/minikube-integration/19452-977/kubeconfig needs updating (will repair): [kubeconfig missing "ha-138000" cluster setting kubeconfig missing "ha-138000" context setting]
	I0815 16:30:12.999150    3848 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-977/kubeconfig: {Name:mk0f04b0199f84dc20eb294d5b790451b41e43fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:30:12.999761    3848 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:30:12.999936    3848 kapi.go:59] client config for ha-138000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.key", CAFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x3a6cf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0815 16:30:13.000222    3848 cert_rotation.go:140] Starting client certificate rotation controller
	I0815 16:30:13.000394    3848 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 16:30:13.007927    3848 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0815 16:30:13.007944    3848 kubeadm.go:597] duration metric: took 16.941718ms to restartPrimaryControlPlane
	I0815 16:30:13.007950    3848 kubeadm.go:394] duration metric: took 38.534887ms to StartCluster
	I0815 16:30:13.007960    3848 settings.go:142] acquiring lock: {Name:mk694dad19d37394fa6b13c51a7dc54b62e97c12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:30:13.008036    3848 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:30:13.008396    3848 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-977/kubeconfig: {Name:mk0f04b0199f84dc20eb294d5b790451b41e43fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:30:13.008625    3848 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 16:30:13.008644    3848 start.go:241] waiting for startup goroutines ...
	I0815 16:30:13.008652    3848 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 16:30:13.008752    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:30:13.052465    3848 out.go:177] * Enabled addons: 
	I0815 16:30:13.073695    3848 addons.go:510] duration metric: took 65.048594ms for enable addons: enabled=[]
	I0815 16:30:13.073733    3848 start.go:246] waiting for cluster config update ...
	I0815 16:30:13.073745    3848 start.go:255] writing updated cluster config ...
	I0815 16:30:13.095512    3848 out.go:201] 
	I0815 16:30:13.116951    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:30:13.117068    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:30:13.139649    3848 out.go:177] * Starting "ha-138000-m02" control-plane node in "ha-138000" cluster
	I0815 16:30:13.181551    3848 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:30:13.181610    3848 cache.go:56] Caching tarball of preloaded images
	I0815 16:30:13.181807    3848 preload.go:172] Found /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0815 16:30:13.181826    3848 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:30:13.181935    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:30:13.182895    3848 start.go:360] acquireMachinesLock for ha-138000-m02: {Name:mkac8034c8a60617271f2754a03dcaf74fc4fef7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:30:13.183018    3848 start.go:364] duration metric: took 98.069µs to acquireMachinesLock for "ha-138000-m02"
	I0815 16:30:13.183044    3848 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:30:13.183051    3848 fix.go:54] fixHost starting: m02
	I0815 16:30:13.183444    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:30:13.183470    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:30:13.192973    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52251
	I0815 16:30:13.193340    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:30:13.193664    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:30:13.193677    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:30:13.193949    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:30:13.194068    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:30:13.194158    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetState
	I0815 16:30:13.194250    3848 main.go:141] libmachine: (ha-138000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:30:13.194330    3848 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid from json: 3670
	I0815 16:30:13.195266    3848 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid 3670 missing from process table
	I0815 16:30:13.195300    3848 fix.go:112] recreateIfNeeded on ha-138000-m02: state=Stopped err=<nil>
	I0815 16:30:13.195308    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	W0815 16:30:13.195387    3848 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:30:13.216598    3848 out.go:177] * Restarting existing hyperkit VM for "ha-138000-m02" ...
	I0815 16:30:13.258591    3848 main.go:141] libmachine: (ha-138000-m02) Calling .Start
	I0815 16:30:13.258850    3848 main.go:141] libmachine: (ha-138000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:30:13.258951    3848 main.go:141] libmachine: (ha-138000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/hyperkit.pid
	I0815 16:30:13.260726    3848 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid 3670 missing from process table
	I0815 16:30:13.260746    3848 main.go:141] libmachine: (ha-138000-m02) DBG | pid 3670 is in state "Stopped"
	I0815 16:30:13.260762    3848 main.go:141] libmachine: (ha-138000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/hyperkit.pid...
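
The DBG lines above show the stale-pid handling: the pid file survived an unclean shutdown, pid 3670 no longer exists, so the file is removed before a fresh hyperkit is launched. The liveness test behind "missing from process table" is the classic signal-0 probe, sketched here with the pid from the log:

    package main

    import (
        "fmt"
        "os"
        "syscall"
    )

    // pidAlive reports whether a process exists. On Unix, os.FindProcess always
    // succeeds, so sending signal 0 (which delivers nothing) is the real check.
    func pidAlive(pid int) bool {
        proc, err := os.FindProcess(pid)
        if err != nil {
            return false
        }
        return proc.Signal(syscall.Signal(0)) == nil
    }

    func main() {
        if !pidAlive(3670) {
            fmt.Println("pid 3670 is gone; the hyperkit.pid file is stale and can be removed")
        }
    }
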
	I0815 16:30:13.261090    3848 main.go:141] libmachine: (ha-138000-m02) DBG | Using UUID 4cff9b5a-9fe3-4215-9139-05f05b79bce3
	I0815 16:30:13.290755    3848 main.go:141] libmachine: (ha-138000-m02) DBG | Generated MAC 9a:c2:e9:d7:1c:58
	I0815 16:30:13.290775    3848 main.go:141] libmachine: (ha-138000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000
	I0815 16:30:13.290894    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"4cff9b5a-9fe3-4215-9139-05f05b79bce3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003beae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:30:13.290919    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"4cff9b5a-9fe3-4215-9139-05f05b79bce3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003beae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:30:13.290973    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "4cff9b5a-9fe3-4215-9139-05f05b79bce3", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/ha-138000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"}
	I0815 16:30:13.291003    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 4cff9b5a-9fe3-4215-9139-05f05b79bce3 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/ha-138000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"
	I0815 16:30:13.291039    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0815 16:30:13.292431    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 DEBUG: hyperkit: Pid is 4167
	I0815 16:30:13.292922    3848 main.go:141] libmachine: (ha-138000-m02) DBG | Attempt 0
	I0815 16:30:13.292931    3848 main.go:141] libmachine: (ha-138000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:30:13.292988    3848 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid from json: 4167
	I0815 16:30:13.294816    3848 main.go:141] libmachine: (ha-138000-m02) DBG | Searching for 9a:c2:e9:d7:1c:58 in /var/db/dhcpd_leases ...
	I0815 16:30:13.294866    3848 main.go:141] libmachine: (ha-138000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0815 16:30:13.294889    3848 main.go:141] libmachine: (ha-138000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:30:13.294903    3848 main.go:141] libmachine: (ha-138000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfdfcb}
	I0815 16:30:13.294915    3848 main.go:141] libmachine: (ha-138000-m02) DBG | Found match: 9a:c2:e9:d7:1c:58
	I0815 16:30:13.294931    3848 main.go:141] libmachine: (ha-138000-m02) DBG | IP: 192.169.0.6
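
The driver resolves the restarted VM's IP by scanning the host's /var/db/dhcpd_leases for the MAC it generated, as the "Found match" / "IP:" lines above show. A minimal Go sketch of that matching step, using the fields the log prints for each lease (the DHCPEntry struct and findIPByMAC helper are illustrative, not minikube's actual API):

package main

import (
	"fmt"
	"strings"
)

// DHCPEntry mirrors the fields the log prints for each dhcpd lease.
type DHCPEntry struct {
	Name      string
	IPAddress string
	HWAddress string
}

// findIPByMAC returns the IP of the lease whose hardware address matches
// the VM's generated MAC, case-insensitively.
func findIPByMAC(entries []DHCPEntry, mac string) (string, bool) {
	for _, e := range entries {
		if strings.EqualFold(e.HWAddress, mac) {
			return e.IPAddress, true
		}
	}
	return "", false
}

func main() {
	// Two of the seven entries the log found above.
	leases := []DHCPEntry{
		{Name: "minikube", IPAddress: "192.169.0.5", HWAddress: "66:4d:cd:54:35:15"},
		{Name: "minikube", IPAddress: "192.169.0.6", HWAddress: "9a:c2:e9:d7:1c:58"},
	}
	if ip, ok := findIPByMAC(leases, "9a:c2:e9:d7:1c:58"); ok {
		fmt.Println("IP:", ip) // IP: 192.169.0.6
	}
}
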
	I0815 16:30:13.294997    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetConfigRaw
	I0815 16:30:13.295728    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetIP
	I0815 16:30:13.295920    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:30:13.296384    3848 machine.go:93] provisionDockerMachine start ...
	I0815 16:30:13.296394    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:30:13.296516    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:13.296606    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:13.296695    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:13.296801    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:13.296905    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:13.297071    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:13.297242    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:30:13.297249    3848 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 16:30:13.300476    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0815 16:30:13.310276    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0815 16:30:13.311421    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:30:13.311448    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:30:13.311463    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:30:13.311475    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:30:13.698130    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0815 16:30:13.698145    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0815 16:30:13.812764    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:30:13.812785    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:30:13.812794    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:30:13.812888    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:30:13.813620    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0815 16:30:13.813637    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0815 16:30:19.405369    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:19 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0815 16:30:19.405428    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:19 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0815 16:30:19.405441    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:19 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0815 16:30:19.429063    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:19 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0815 16:30:24.364782    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 16:30:24.364794    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetMachineName
	I0815 16:30:24.364947    3848 buildroot.go:166] provisioning hostname "ha-138000-m02"
	I0815 16:30:24.364958    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetMachineName
	I0815 16:30:24.365057    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:24.365147    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:24.365238    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.365323    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.365453    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:24.365589    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:24.365741    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:30:24.365749    3848 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-138000-m02 && echo "ha-138000-m02" | sudo tee /etc/hostname
	I0815 16:30:24.435748    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-138000-m02
	
	I0815 16:30:24.435762    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:24.435893    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:24.435990    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.436082    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.436186    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:24.436313    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:24.436463    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:30:24.436475    3848 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-138000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-138000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-138000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 16:30:24.504475    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 16:30:24.504492    3848 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19452-977/.minikube CaCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19452-977/.minikube}
	I0815 16:30:24.504503    3848 buildroot.go:174] setting up certificates
	I0815 16:30:24.504519    3848 provision.go:84] configureAuth start
	I0815 16:30:24.504526    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetMachineName
	I0815 16:30:24.504663    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetIP
	I0815 16:30:24.504758    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:24.504846    3848 provision.go:143] copyHostCerts
	I0815 16:30:24.504877    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:30:24.504929    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem, removing ...
	I0815 16:30:24.504935    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:30:24.505124    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem (1082 bytes)
	I0815 16:30:24.505339    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:30:24.505371    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem, removing ...
	I0815 16:30:24.505375    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:30:24.505446    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem (1123 bytes)
	I0815 16:30:24.505596    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:30:24.505624    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem, removing ...
	I0815 16:30:24.505628    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:30:24.505696    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem (1679 bytes)
	I0815 16:30:24.505845    3848 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem org=jenkins.ha-138000-m02 san=[127.0.0.1 192.169.0.6 ha-138000-m02 localhost minikube]
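
The server cert generated at provision.go:117 carries SANs for every name and address the node answers to (127.0.0.1, 192.169.0.6, ha-138000-m02, localhost, minikube). A self-contained sketch of issuing such a cert with Go's crypto/x509; it self-signs for brevity, whereas minikube signs with the cluster CA key shown in the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// SANs taken from the san=[...] list in the log line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-138000-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
		DNSNames:     []string{"ha-138000-m02", "localhost", "minikube"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed: template doubles as parent. minikube would pass the CA
	// cert and CA private key here instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued server cert, %d DER bytes\n", len(der))
}
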
	I0815 16:30:24.669808    3848 provision.go:177] copyRemoteCerts
	I0815 16:30:24.669859    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 16:30:24.669875    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:24.670016    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:24.670138    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.670247    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:24.670341    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	I0815 16:30:24.707125    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 16:30:24.707202    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 16:30:24.726013    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 16:30:24.726070    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 16:30:24.745370    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 16:30:24.745429    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 16:30:24.765407    3848 provision.go:87] duration metric: took 260.879651ms to configureAuth
	I0815 16:30:24.765419    3848 buildroot.go:189] setting minikube options for container-runtime
	I0815 16:30:24.765586    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:30:24.765614    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:30:24.765750    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:24.765841    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:24.765917    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.765992    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.766073    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:24.766180    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:24.766348    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:30:24.766356    3848 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0815 16:30:24.825444    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0815 16:30:24.825455    3848 buildroot.go:70] root file system type: tmpfs
	I0815 16:30:24.825535    3848 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0815 16:30:24.825546    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:24.825668    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:24.825761    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.825848    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.825931    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:24.826067    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:24.826205    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:30:24.826249    3848 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0815 16:30:24.894944    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0815 16:30:24.894961    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:24.895099    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:24.895204    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.895287    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.895382    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:24.895505    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:24.895640    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:30:24.895652    3848 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0815 16:30:26.552071    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0815 16:30:26.552086    3848 machine.go:96] duration metric: took 13.255738864s to provisionDockerMachine
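
The docker unit above is installed with a diff-or-replace idiom: `sudo diff -u old new || { mv; daemon-reload; enable; restart; }`, so the daemon is only restarted when the rendered unit actually changed (here diff fails because no unit existed yet, which forces the install path and the "Created symlink" output). A rough Go rendering of that idempotent write, assuming nothing about minikube's internals:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// writeIfChanged installs content at path only when it differs from what
// is already there, and reports whether a service restart is needed.
func writeIfChanged(path string, content []byte) (changed bool, err error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, content) {
		return false, nil // identical: nothing to do, no restart
	}
	if err != nil && !os.IsNotExist(err) {
		return false, err
	}
	if err := os.WriteFile(path, content, 0o644); err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
	changed, err := writeIfChanged("/tmp/docker.service", unit)
	if err != nil {
		panic(err)
	}
	fmt.Println("restart needed:", changed)
}
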
	I0815 16:30:26.552093    3848 start.go:293] postStartSetup for "ha-138000-m02" (driver="hyperkit")
	I0815 16:30:26.552100    3848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 16:30:26.552110    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:30:26.552311    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 16:30:26.552326    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:26.552426    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:26.552517    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:26.552610    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:26.552712    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	I0815 16:30:26.593353    3848 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 16:30:26.598425    3848 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 16:30:26.598438    3848 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/addons for local assets ...
	I0815 16:30:26.598548    3848 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/files for local assets ...
	I0815 16:30:26.598699    3848 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> 14982.pem in /etc/ssl/certs
	I0815 16:30:26.598705    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /etc/ssl/certs/14982.pem
	I0815 16:30:26.598861    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 16:30:26.610066    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:30:26.645456    3848 start.go:296] duration metric: took 93.354607ms for postStartSetup
	I0815 16:30:26.645497    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:30:26.645674    3848 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0815 16:30:26.645688    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:26.645776    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:26.645850    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:26.645933    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:26.646015    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	I0815 16:30:26.683361    3848 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0815 16:30:26.683423    3848 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0815 16:30:26.737495    3848 fix.go:56] duration metric: took 13.554488062s for fixHost
	I0815 16:30:26.737525    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:26.737661    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:26.737749    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:26.737848    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:26.737943    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:26.738080    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:26.738216    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:30:26.738224    3848 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 16:30:26.796943    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723764627.049155775
	
	I0815 16:30:26.796953    3848 fix.go:216] guest clock: 1723764627.049155775
	I0815 16:30:26.796959    3848 fix.go:229] Guest: 2024-08-15 16:30:27.049155775 -0700 PDT Remote: 2024-08-15 16:30:26.737509 -0700 PDT m=+32.739307986 (delta=311.646775ms)
	I0815 16:30:26.796973    3848 fix.go:200] guest clock delta is within tolerance: 311.646775ms
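
The fix step above compares the guest's `date +%s.%N` output against host wall-clock and proceeds when the delta (311.6ms here) is within tolerance. A minimal sketch of that comparison; clockDelta is a hypothetical helper and the 2s tolerance is illustrative, not minikube's configured value:

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far
// the guest clock is ahead of (positive) or behind (negative) host time.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Host and guest timestamps from the log above.
	host := time.Date(2024, 8, 15, 23, 30, 26, 737509000, time.UTC)
	d, err := clockDelta("1723764627.049155775", host)
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second
	fmt.Printf("delta=%v within tolerance=%v: %v\n", d, tolerance, d < tolerance && d > -tolerance)
}
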
	I0815 16:30:26.796977    3848 start.go:83] releasing machines lock for "ha-138000-m02", held for 13.613993837s
	I0815 16:30:26.796994    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:30:26.797121    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetIP
	I0815 16:30:26.821561    3848 out.go:177] * Found network options:
	I0815 16:30:26.841357    3848 out.go:177]   - NO_PROXY=192.169.0.5
	W0815 16:30:26.862556    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 16:30:26.862605    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:30:26.863433    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:30:26.863671    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:30:26.863815    3848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 16:30:26.863856    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	W0815 16:30:26.863902    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 16:30:26.863997    3848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 16:30:26.864019    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:26.864116    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:26.864226    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:26.864284    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:26.864479    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:26.864535    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:26.864691    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	I0815 16:30:26.864752    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:26.864886    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	W0815 16:30:26.897510    3848 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 16:30:26.897576    3848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 16:30:26.944949    3848 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 16:30:26.944964    3848 start.go:495] detecting cgroup driver to use...
	I0815 16:30:26.945031    3848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:30:26.959965    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0815 16:30:26.969052    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0815 16:30:26.977789    3848 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0815 16:30:26.977840    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0815 16:30:26.986870    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:30:26.995871    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0815 16:30:27.004811    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:30:27.013722    3848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 16:30:27.022692    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0815 16:30:27.031569    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0815 16:30:27.040462    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0815 16:30:27.049386    3848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 16:30:27.057419    3848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 16:30:27.065508    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:27.164154    3848 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0815 16:30:27.181165    3848 start.go:495] detecting cgroup driver to use...
	I0815 16:30:27.181250    3848 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0815 16:30:27.192595    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:30:27.203037    3848 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 16:30:27.216573    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:30:27.228211    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:30:27.239268    3848 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0815 16:30:27.258656    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:30:27.269954    3848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:30:27.284667    3848 ssh_runner.go:195] Run: which cri-dockerd
	I0815 16:30:27.287552    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0815 16:30:27.295653    3848 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0815 16:30:27.309091    3848 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0815 16:30:27.403676    3848 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0815 16:30:27.500434    3848 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0815 16:30:27.500464    3848 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0815 16:30:27.514754    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:27.610670    3848 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0815 16:30:29.951174    3848 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.340492876s)
	I0815 16:30:29.951241    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0815 16:30:29.961656    3848 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0815 16:30:29.974207    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:30:29.984718    3848 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0815 16:30:30.078933    3848 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0815 16:30:30.191991    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:30.301187    3848 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0815 16:30:30.314601    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:30:30.325440    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:30.420867    3848 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0815 16:30:30.486340    3848 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0815 16:30:30.486435    3848 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0815 16:30:30.491068    3848 start.go:563] Will wait 60s for crictl version
	I0815 16:30:30.491127    3848 ssh_runner.go:195] Run: which crictl
	I0815 16:30:30.494150    3848 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 16:30:30.523583    3848 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0815 16:30:30.523658    3848 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:30:30.541608    3848 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:30:30.598613    3848 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0815 16:30:30.658061    3848 out.go:177]   - env NO_PROXY=192.169.0.5
	I0815 16:30:30.695353    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetIP
	I0815 16:30:30.695714    3848 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0815 16:30:30.700361    3848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
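
Both host.minikube.internal (here) and control-plane.minikube.internal (later, at 16:30:31) are pinned with the same shell pattern: drop any stale tab-separated entry for the name, append the fresh mapping, and copy the temp file back over /etc/hosts. A rough Go rendering of that rewrite (pinHostsEntry is a made-up helper):

package main

import (
	"fmt"
	"strings"
)

// pinHostsEntry drops blank lines and any line already ending in
// "\t<name>", then appends the fresh ip<TAB>name mapping, mirroring the
// grep -v / echo one-liner in the log.
func pinHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.169.0.9\thost.minikube.internal\n"
	fmt.Print(pinHostsEntry(hosts, "192.169.0.1", "host.minikube.internal"))
}
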
	I0815 16:30:30.709893    3848 mustload.go:65] Loading cluster: ha-138000
	I0815 16:30:30.710062    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:30:30.710316    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:30:30.710336    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:30:30.719005    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52273
	I0815 16:30:30.719360    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:30:30.719741    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:30:30.719750    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:30:30.719981    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:30:30.720103    3848 main.go:141] libmachine: (ha-138000) Calling .GetState
	I0815 16:30:30.720187    3848 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:30:30.720267    3848 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid from json: 3862
	I0815 16:30:30.721211    3848 host.go:66] Checking if "ha-138000" exists ...
	I0815 16:30:30.721471    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:30:30.721491    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:30:30.729999    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52275
	I0815 16:30:30.730336    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:30:30.730678    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:30:30.730693    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:30:30.730926    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:30:30.731056    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:30:30.731175    3848 certs.go:68] Setting up /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000 for IP: 192.169.0.6
	I0815 16:30:30.731181    3848 certs.go:194] generating shared ca certs ...
	I0815 16:30:30.731197    3848 certs.go:226] acquiring lock for ca certs: {Name:mka1c88019f6064d2983ac988db71a67aeb65696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:30:30.731336    3848 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key
	I0815 16:30:30.731387    3848 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key
	I0815 16:30:30.731396    3848 certs.go:256] generating profile certs ...
	I0815 16:30:30.731509    3848 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.key
	I0815 16:30:30.731595    3848 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key.5f0053a1
	I0815 16:30:30.731651    3848 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key
	I0815 16:30:30.731658    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 16:30:30.731679    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 16:30:30.731700    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 16:30:30.731722    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 16:30:30.731740    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 16:30:30.731768    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 16:30:30.731791    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 16:30:30.731809    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 16:30:30.731883    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem (1338 bytes)
	W0815 16:30:30.731920    3848 certs.go:480] ignoring /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498_empty.pem, impossibly tiny 0 bytes
	I0815 16:30:30.731928    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 16:30:30.731973    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem (1082 bytes)
	I0815 16:30:30.732017    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem (1123 bytes)
	I0815 16:30:30.732045    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem (1679 bytes)
	I0815 16:30:30.732121    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:30:30.732157    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /usr/share/ca-certificates/14982.pem
	I0815 16:30:30.732177    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:30:30.732194    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem -> /usr/share/ca-certificates/1498.pem
	I0815 16:30:30.732219    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:30.732316    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:30.732406    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:30.732529    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:30.732609    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:30:30.763783    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0815 16:30:30.767449    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0815 16:30:30.776129    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0815 16:30:30.779163    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0815 16:30:30.787730    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0815 16:30:30.791082    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0815 16:30:30.799754    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0815 16:30:30.802809    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0815 16:30:30.811618    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0815 16:30:30.814650    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0815 16:30:30.822963    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0815 16:30:30.826004    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0815 16:30:30.834906    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 16:30:30.854912    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 16:30:30.874577    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 16:30:30.894388    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 16:30:30.914413    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 16:30:30.933887    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 16:30:30.953772    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 16:30:30.973419    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 16:30:30.992862    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /usr/share/ca-certificates/14982.pem (1708 bytes)
	I0815 16:30:31.012391    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 16:30:31.031916    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem --> /usr/share/ca-certificates/1498.pem (1338 bytes)
	I0815 16:30:31.051694    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0815 16:30:31.065167    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0815 16:30:31.078573    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0815 16:30:31.091997    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0815 16:30:31.105622    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0815 16:30:31.119143    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0815 16:30:31.132670    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0815 16:30:31.146406    3848 ssh_runner.go:195] Run: openssl version
	I0815 16:30:31.150444    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 16:30:31.158651    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:30:31.162017    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:05 /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:30:31.162055    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:30:31.166191    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 16:30:31.174561    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1498.pem && ln -fs /usr/share/ca-certificates/1498.pem /etc/ssl/certs/1498.pem"
	I0815 16:30:31.182745    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1498.pem
	I0815 16:30:31.186223    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:14 /usr/share/ca-certificates/1498.pem
	I0815 16:30:31.186262    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1498.pem
	I0815 16:30:31.190437    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1498.pem /etc/ssl/certs/51391683.0"
	I0815 16:30:31.198642    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14982.pem && ln -fs /usr/share/ca-certificates/14982.pem /etc/ssl/certs/14982.pem"
	I0815 16:30:31.207129    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14982.pem
	I0815 16:30:31.210527    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:14 /usr/share/ca-certificates/14982.pem
	I0815 16:30:31.210565    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14982.pem
	I0815 16:30:31.214780    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14982.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 16:30:31.223055    3848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 16:30:31.226404    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 16:30:31.230624    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 16:30:31.234964    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 16:30:31.239281    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 16:30:31.243508    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 16:30:31.247740    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
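
The `-checkend 86400` runs above ask openssl whether each control-plane cert expires within the next 24 hours; minikube regenerates any that do. The equivalent check in Go (expiresWithin is a hypothetical helper, not minikube's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within
// d -- the same question as `openssl x509 -checkend 86400` for d = 24h.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
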
	I0815 16:30:31.251885    3848 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.0 docker true true} ...
	I0815 16:30:31.251948    3848 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-138000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 16:30:31.251968    3848 kube-vip.go:115] generating kube-vip config ...
	I0815 16:30:31.251997    3848 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 16:30:31.264157    3848 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 16:30:31.264200    3848 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
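Note: this manifest is rendered as a Go template and written to /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes, per the scp line below), where the kubelet picks it up as a static pod. A hedged sketch that round-trips the file through gopkg.in/yaml.v3 to list the image and env settings; the library choice is an assumption, not minikube's own code:

	package main

	import (
		"fmt"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
		if err != nil {
			panic(err)
		}
		var pod struct {
			Spec struct {
				Containers []struct {
					Image string `yaml:"image"`
					Env   []struct {
						Name  string `yaml:"name"`
						Value string `yaml:"value"`
					} `yaml:"env"`
				} `yaml:"containers"`
			} `yaml:"spec"`
		}
		if err := yaml.Unmarshal(data, &pod); err != nil {
			panic(err)
		}
		for _, c := range pod.Spec.Containers {
			fmt.Println("image:", c.Image)
			for _, e := range c.Env {
				fmt.Printf("  %s=%s\n", e.Name, e.Value)
			}
		}
	}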
	I0815 16:30:31.264247    3848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 16:30:31.272799    3848 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 16:30:31.272844    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0815 16:30:31.280999    3848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0815 16:30:31.294195    3848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 16:30:31.307421    3848 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0815 16:30:31.321201    3848 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0815 16:30:31.324137    3848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
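Note: the bash one-liner above makes the /etc/hosts update idempotent: it strips any existing control-plane.minikube.internal entry before appending the current VIP mapping. The same rewrite as a Go sketch (error handling trimmed to panics; in the real flow this runs over ssh with sudo):

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const entry = "192.169.0.254\tcontrol-plane.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			// Drop any stale mapping for the control-plane name.
			if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, entry, "")
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")), 0644); err != nil {
			panic(err)
		}
	}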
	I0815 16:30:31.334188    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:31.429450    3848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:30:31.443961    3848 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 16:30:31.444161    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:30:31.465375    3848 out.go:177] * Verifying Kubernetes components...
	I0815 16:30:31.507025    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:31.625968    3848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:30:31.645410    3848 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:30:31.645610    3848 kapi.go:59] client config for ha-138000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.key", CAFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x3a6cf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0815 16:30:31.645648    3848 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
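Note: the rest.Config dump above shows the client being built from the profile's client cert/key and CA, and the warning line swaps the stale VIP host for the concrete control-plane endpoint. A hedged client-go sketch of both steps (the file paths are illustrative; the real ones are the profile paths in the dump):

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg := &rest.Config{
			Host: "https://192.169.0.254:8443", // stale VIP host from the kubeconfig
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: "client.crt",
				KeyFile:  "client.key",
				CAFile:   "ca.crt",
			},
		}
		// Swap in the reachable control-plane endpoint, as kubeadm.go:483 logs.
		cfg.Host = "https://192.169.0.5:8443"
		if _, err := kubernetes.NewForConfig(cfg); err != nil {
			panic(err)
		}
		fmt.Println("client ready against", cfg.Host)
	}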
	I0815 16:30:31.645835    3848 node_ready.go:35] waiting up to 6m0s for node "ha-138000-m02" to be "Ready" ...
	I0815 16:30:31.645920    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:31.645925    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:31.645933    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:31.645936    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.053028    3848 round_trippers.go:574] Response Status: 200 OK in 8407 milliseconds
	I0815 16:30:40.053934    3848 node_ready.go:49] node "ha-138000-m02" has status "Ready":"True"
	I0815 16:30:40.053949    3848 node_ready.go:38] duration metric: took 8.408123647s for node "ha-138000-m02" to be "Ready" ...
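Note: the request/response pair above is one iteration of the node-Ready poll: GET the Node object, read its Ready condition, repeat until it is True or the 6m0s budget runs out. A hedged client-go sketch of that loop (package and function names are made up; assumes an already-configured clientset):

	package nodewait // illustrative package name

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady polls the Node's Ready condition until it is True,
	// mirroring node_ready.go's "waiting up to 6m0s" loop.
	func waitNodeReady(cs *kubernetes.Clientset, name string) error {
		return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API error: keep polling
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}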
	I0815 16:30:40.053959    3848 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 16:30:40.053997    3848 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0815 16:30:40.054008    3848 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0815 16:30:40.054051    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:30:40.054057    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.054064    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.054066    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.076049    3848 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
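Note: the kube-system pods list fetched here seeds the per-component waits that follow; each entry in the label set logged above is one selector. A hedged sketch of that listing (package and function names illustrative):

	package syspods // illustrative package name

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// listCritical fetches each system-critical component by the label
	// selectors named in the pod_ready.go line above.
	func listCritical(cs *kubernetes.Clientset) error {
		selectors := []string{
			"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
			"component=kube-controller-manager", "k8s-app=kube-proxy",
			"component=kube-scheduler",
		}
		for _, sel := range selectors {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
				metav1.ListOptions{LabelSelector: sel})
			if err != nil {
				return err
			}
			for _, p := range pods.Items {
				fmt.Println(sel, "->", p.Name)
			}
		}
		return nil
	}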
	I0815 16:30:40.083485    3848 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dmgt5" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.083552    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-dmgt5
	I0815 16:30:40.083559    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.083565    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.083569    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.090478    3848 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 16:30:40.091010    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:40.091019    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.091025    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.091028    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.094713    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:40.095017    3848 pod_ready.go:93] pod "coredns-6f6b679f8f-dmgt5" in "kube-system" namespace has status "Ready":"True"
	I0815 16:30:40.095031    3848 pod_ready.go:82] duration metric: took 11.52447ms for pod "coredns-6f6b679f8f-dmgt5" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.095040    3848 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-zc8jj" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.095087    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-zc8jj
	I0815 16:30:40.095094    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.095102    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.095107    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.101746    3848 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 16:30:40.102483    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:40.102492    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.102500    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.102503    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.105983    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:40.106569    3848 pod_ready.go:93] pod "coredns-6f6b679f8f-zc8jj" in "kube-system" namespace has status "Ready":"True"
	I0815 16:30:40.106587    3848 pod_ready.go:82] duration metric: took 11.533246ms for pod "coredns-6f6b679f8f-zc8jj" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.106595    3848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.106638    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000
	I0815 16:30:40.106644    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.106651    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.106654    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.110887    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:30:40.111881    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:40.111893    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.111902    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.111907    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.114794    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:40.115181    3848 pod_ready.go:93] pod "etcd-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:30:40.115194    3848 pod_ready.go:82] duration metric: took 8.594007ms for pod "etcd-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.115201    3848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.115242    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m02
	I0815 16:30:40.115247    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.115252    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.115256    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.121257    3848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 16:30:40.121684    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:40.121694    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.121704    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.121710    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.125990    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:30:40.126507    3848 pod_ready.go:93] pod "etcd-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:30:40.126520    3848 pod_ready.go:82] duration metric: took 11.312949ms for pod "etcd-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.126528    3848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.126573    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:30:40.126579    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.126585    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.126589    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.129916    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:40.254208    3848 request.go:632] Waited for 123.846339ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:30:40.254247    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:30:40.254252    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.254262    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.254299    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.258157    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:40.258510    3848 pod_ready.go:93] pod "etcd-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:30:40.258520    3848 pod_ready.go:82] duration metric: took 131.98589ms for pod "etcd-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
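Note: the "Waited ... due to client-side throttling" lines here and below come from client-go's client-side rate limiter; with QPS and Burst left at 0 in the rest.Config dump above, client-go falls back to its defaults of QPS 5 and Burst 10, so bursts of polling requests queue up. A hedged sketch of raising those limits (the 50/100 values are illustrative):

	package throttle // illustrative package name

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	// newFasterClient bumps the client-side rate limits; client-go treats
	// QPS 0 / Burst 0 as the defaults of 5 and 10, which is what produces
	// the waits logged here.
	func newFasterClient(cfg *rest.Config) (*kubernetes.Clientset, error) {
		cfg.QPS = 50
		cfg.Burst = 100
		return kubernetes.NewForConfig(cfg)
	}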
	I0815 16:30:40.258532    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.454350    3848 request.go:632] Waited for 195.778452ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000
	I0815 16:30:40.454424    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000
	I0815 16:30:40.454430    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.454436    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.454441    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.457270    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:40.654210    3848 request.go:632] Waited for 196.49648ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:40.654247    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:40.654254    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.654300    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.654306    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.662420    3848 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0815 16:30:40.662780    3848 pod_ready.go:98] node "ha-138000" hosting pod "kube-apiserver-ha-138000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-138000" has status "Ready":"False"
	I0815 16:30:40.662798    3848 pod_ready.go:82] duration metric: took 404.260054ms for pod "kube-apiserver-ha-138000" in "kube-system" namespace to be "Ready" ...
	E0815 16:30:40.662809    3848 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-138000" hosting pod "kube-apiserver-ha-138000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-138000" has status "Ready":"False"
	I0815 16:30:40.662819    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.854147    3848 request.go:632] Waited for 191.277341ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:40.854226    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:40.854232    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.854238    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.854243    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.859631    3848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 16:30:41.054463    3848 request.go:632] Waited for 194.266573ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:41.054497    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:41.054501    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:41.054509    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:41.054513    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:41.058210    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:41.254872    3848 request.go:632] Waited for 91.867207ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:41.254917    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:41.254966    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:41.254978    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:41.254982    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:41.258343    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:41.455877    3848 request.go:632] Waited for 196.977249ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:41.455912    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:41.455919    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:41.455925    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:41.455931    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:41.457855    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:41.664056    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:41.664082    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:41.664093    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:41.664100    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:41.667876    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:41.854208    3848 request.go:632] Waited for 185.493412ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:41.854247    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:41.854253    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:41.854260    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:41.854264    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:41.856823    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:42.163578    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:42.163664    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:42.163680    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:42.163716    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:42.167135    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:42.254205    3848 request.go:632] Waited for 86.267935ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:42.254261    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:42.254269    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:42.254286    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:42.254324    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:42.257709    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:42.664326    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:42.664344    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:42.664353    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:42.664357    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:42.666960    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:42.667548    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:42.667555    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:42.667561    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:42.667564    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:42.669222    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:42.669539    3848 pod_ready.go:103] pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace has status "Ready":"False"
	I0815 16:30:43.163236    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:43.163273    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:43.163281    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:43.163286    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:43.165588    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:43.166081    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:43.166088    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:43.166094    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:43.166097    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:43.167727    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:43.663181    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:43.663266    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:43.663274    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:43.663277    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:43.665851    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:43.666288    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:43.666295    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:43.666301    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:43.666305    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:43.669495    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:44.163768    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:44.163782    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:44.163788    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:44.163800    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:44.166284    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:44.166820    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:44.166828    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:44.166834    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:44.166853    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:44.169173    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:44.663006    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:44.663018    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:44.663023    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:44.663025    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:44.665460    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:44.666145    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:44.666152    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:44.666158    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:44.666162    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:44.668246    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:45.164214    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:45.164237    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:45.164314    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:45.164325    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:45.167819    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:45.168514    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:45.168521    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:45.168528    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:45.168531    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:45.170434    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:45.170836    3848 pod_ready.go:103] pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace has status "Ready":"False"
	I0815 16:30:45.665030    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:45.665056    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:45.665068    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:45.665073    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:45.668540    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:45.669128    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:45.669139    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:45.669148    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:45.669152    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:45.671055    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:46.163033    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:46.163095    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:46.163108    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:46.163116    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:46.166371    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:46.166786    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:46.166793    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:46.166799    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:46.166803    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:46.168600    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:46.663767    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:46.663791    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:46.663803    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:46.663814    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:46.667030    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:46.667614    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:46.667625    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:46.667633    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:46.667637    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:46.669233    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:47.163455    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:47.163469    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:47.163475    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:47.163480    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:47.167195    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:47.167557    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:47.167565    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:47.167571    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:47.167576    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:47.170814    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:47.171266    3848 pod_ready.go:103] pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace has status "Ready":"False"
	I0815 16:30:47.663794    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:47.663820    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:47.663831    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:47.663839    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:47.667639    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:47.668283    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:47.668291    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:47.668297    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:47.668301    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:47.669950    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:48.164538    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:48.164559    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:48.164581    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:48.164603    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:48.168530    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:48.169233    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:48.169241    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:48.169248    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:48.169251    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:48.171274    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:48.663780    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:48.663804    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:48.663815    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:48.663821    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:48.667278    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:48.667837    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:48.667845    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:48.667851    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:48.667856    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:48.669518    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:49.165064    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:49.165087    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:49.165098    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:49.165104    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:49.168508    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:49.169206    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:49.169217    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:49.169225    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:49.169230    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:49.171198    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:49.171795    3848 pod_ready.go:103] pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace has status "Ready":"False"
	I0815 16:30:49.663424    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:49.663448    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:49.663459    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:49.663467    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:49.667225    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:49.667697    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:49.667705    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:49.667711    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:49.667714    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:49.669376    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:50.164125    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:50.164149    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:50.164161    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:50.164166    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:50.167285    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:50.167810    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:50.167817    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:50.167823    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:50.167827    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:50.171799    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:50.663500    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:50.663525    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:50.663537    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:50.663543    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:50.667177    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:50.667713    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:50.667720    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:50.667726    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:50.667730    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:50.669352    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:51.164194    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:51.164219    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:51.164237    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:51.164244    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:51.167593    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:51.168246    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:51.168257    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:51.168264    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:51.168270    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:51.170524    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:51.664614    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:51.664638    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:51.664657    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:51.664665    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:51.668046    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:51.668566    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:51.668577    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:51.668585    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:51.668607    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:51.671534    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:51.671914    3848 pod_ready.go:103] pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace has status "Ready":"False"
	I0815 16:30:52.164065    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:52.164089    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:52.164101    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:52.164110    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:52.167433    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:52.167935    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:52.167943    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:52.167948    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:52.167952    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:52.169540    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:52.169859    3848 pod_ready.go:93] pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:30:52.169869    3848 pod_ready.go:82] duration metric: took 11.507082407s for pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
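Note: the per-pod "Ready" test behind these pod_ready.go loops reduces to reading the PodReady condition off each fetched pod's status. A short client-go sketch (package and function names illustrative):

	package podready // illustrative package name

	import corev1 "k8s.io/api/core/v1"

	// isPodReady mirrors pod_ready.go's check: a pod counts as Ready
	// once its PodReady condition reports True.
	func isPodReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}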
	I0815 16:30:52.169876    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:52.169910    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m03
	I0815 16:30:52.169915    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:52.169920    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:52.169923    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:52.171715    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:52.172141    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:30:52.172148    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:52.172154    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:52.172158    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:52.173532    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:52.173854    3848 pod_ready.go:93] pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:30:52.173863    3848 pod_ready.go:82] duration metric: took 3.981675ms for pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:52.173872    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:52.173900    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:52.173905    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:52.173911    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:52.173915    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:52.175518    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:52.175919    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:52.175926    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:52.175932    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:52.175936    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:52.177444    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:52.675197    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:52.675270    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:52.675284    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:52.675316    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:52.678186    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:52.678703    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:52.678711    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:52.678716    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:52.678719    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:52.680216    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:53.174971    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:53.174985    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:53.174994    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:53.175001    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:53.177452    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:53.177896    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:53.177903    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:53.177909    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:53.177912    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:53.179480    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:53.674788    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:53.674799    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:53.674806    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:53.674809    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:53.676873    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:53.677297    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:53.677305    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:53.677311    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:53.677315    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:53.678908    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:54.175897    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:54.175920    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:54.175937    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:54.175942    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:54.180021    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:30:54.180479    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:54.180486    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:54.180492    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:54.180495    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:54.182351    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:54.182698    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:30:54.674099    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:54.674113    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:54.674122    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:54.674126    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:54.676508    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:54.676959    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:54.676967    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:54.676973    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:54.676977    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:54.678531    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:55.174102    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:55.174117    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:55.174124    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:55.174129    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:55.176616    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:55.176978    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:55.176985    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:55.176991    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:55.176995    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:55.178804    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:55.675041    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:55.675073    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:55.675080    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:55.675083    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:55.677155    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:55.677606    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:55.677614    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:55.677620    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:55.677623    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:55.679257    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:56.174332    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:56.174347    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:56.174355    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:56.174360    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:56.176768    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:56.177182    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:56.177189    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:56.177194    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:56.177199    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:56.178739    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:56.674623    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:56.674644    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:56.674656    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:56.674663    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:56.678017    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:56.678729    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:56.678740    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:56.678748    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:56.678753    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:56.680396    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:56.680664    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:30:57.174239    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:57.174259    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:57.174270    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:57.174276    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:57.176913    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:57.177317    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:57.177325    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:57.177330    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:57.177333    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:57.179089    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:57.674639    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:57.674650    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:57.674656    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:57.674660    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:57.676502    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:57.676984    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:57.676992    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:57.676997    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:57.677001    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:57.678477    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:58.174097    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:58.174117    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:58.174128    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:58.174136    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:58.177182    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:58.177563    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:58.177571    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:58.177575    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:58.177579    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:58.179304    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:58.675031    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:58.675045    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:58.675051    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:58.675055    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:58.680738    3848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 16:30:58.682155    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:58.682163    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:58.682168    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:58.682171    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:58.686617    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:30:58.686985    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:30:59.174980    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:59.175006    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:59.175018    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:59.175023    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:59.178731    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:59.179314    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:59.179322    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:59.179328    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:59.179332    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:59.181206    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:59.674657    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:59.674670    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:59.674676    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:59.674679    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:59.676675    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:59.677055    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:59.677062    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:59.677069    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:59.677074    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:59.679271    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:00.174152    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:00.174175    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:00.174187    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:00.174194    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:00.177768    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:00.178234    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:00.178241    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:00.178247    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:00.178251    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:00.179906    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:00.675229    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:00.675240    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:00.675246    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:00.675250    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:00.677503    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:00.677966    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:00.677974    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:00.677979    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:00.677983    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:00.681462    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:01.174237    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:01.174258    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:01.174271    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:01.174278    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:01.177221    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:01.177958    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:01.177967    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:01.177973    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:01.177987    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:01.179870    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:01.180167    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:01.674059    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:01.674071    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:01.674078    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:01.674082    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:01.678596    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:31:01.679166    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:01.679175    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:01.679183    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:01.679203    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:01.681866    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:02.174721    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:02.174744    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:02.174757    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:02.174765    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:02.177936    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:02.178578    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:02.178585    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:02.178590    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:02.178593    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:02.180199    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:02.674480    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:02.674492    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:02.674498    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:02.674501    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:02.676574    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:02.677121    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:02.677129    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:02.677135    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:02.677138    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:02.678870    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:03.174993    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:03.175017    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:03.175028    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:03.175034    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:03.178103    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:03.178765    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:03.178773    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:03.178780    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:03.178783    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:03.180384    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:03.180717    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:03.675885    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:03.675928    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:03.675935    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:03.675938    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:03.681610    3848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 16:31:03.682165    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:03.682172    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:03.682178    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:03.682187    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:03.685681    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:04.173973    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:04.173985    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:04.173993    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:04.173996    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:04.176170    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:04.176622    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:04.176629    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:04.176635    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:04.176638    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:04.178918    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:04.674029    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:04.674041    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:04.674047    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:04.674051    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:04.676085    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:04.676616    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:04.676624    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:04.676629    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:04.676633    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:04.678653    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:05.174670    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:05.174682    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:05.174692    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:05.174696    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:05.176894    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:05.177444    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:05.177452    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:05.177458    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:05.177462    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:05.179988    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:05.673967    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:05.673984    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:05.673991    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:05.674005    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:05.676133    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:05.676616    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:05.676623    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:05.676629    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:05.676632    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:05.678220    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:05.678588    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:06.174028    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:06.174040    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:06.174046    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:06.174049    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:06.176193    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:06.176556    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:06.176564    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:06.176570    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:06.176574    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:06.178240    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:06.674003    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:06.674018    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:06.674028    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:06.674032    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:06.676638    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:06.677110    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:06.677118    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:06.677124    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:06.677127    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:06.680025    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:07.175462    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:07.175477    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:07.175485    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:07.175489    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:07.178337    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:07.178886    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:07.178895    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:07.178900    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:07.178904    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:07.181117    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:07.674103    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:07.674115    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:07.674121    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:07.674125    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:07.676375    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:07.676766    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:07.676774    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:07.676780    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:07.676783    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:07.678622    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:07.678897    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:08.174128    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:08.174151    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:08.174166    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:08.174203    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:08.177482    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:08.177896    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:08.177904    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:08.177909    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:08.177914    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:08.179348    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:08.674105    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:08.674132    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:08.674180    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:08.674191    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:08.677562    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:08.677981    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:08.677989    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:08.677994    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:08.677997    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:08.679564    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:09.174687    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:09.174712    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:09.174723    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:09.174728    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:09.177711    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:09.178141    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:09.178149    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:09.178155    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:09.178160    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:09.179715    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:09.675793    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:09.675810    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:09.675860    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:09.675867    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:09.681370    3848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 16:31:09.681707    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:09.681714    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:09.681720    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:09.681724    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:09.683407    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:09.683668    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:10.174082    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:10.174096    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:10.174104    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:10.174111    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:10.176432    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:10.176901    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:10.176909    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:10.176916    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:10.176919    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:10.178547    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:10.674143    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:10.674158    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:10.674166    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:10.674171    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:10.676827    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:10.677366    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:10.677374    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:10.677379    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:10.677398    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:10.679369    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:11.174015    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:11.174031    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:11.174039    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:11.174043    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:11.176194    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:11.176646    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:11.176655    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:11.176661    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:11.176664    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:11.178182    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:11.674088    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:11.674100    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:11.674107    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:11.674111    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:11.676722    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:11.677179    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:11.677186    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:11.677192    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:11.677197    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:11.679318    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:12.173967    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:12.173978    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:12.173983    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:12.173986    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:12.176395    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:12.176784    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:12.176792    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:12.176797    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:12.176799    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:12.178613    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:12.178965    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:12.674752    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:12.674764    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:12.674771    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:12.674774    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:12.676796    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:12.677237    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:12.677244    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:12.677249    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:12.677254    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:12.678824    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:13.174235    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:13.174257    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:13.174269    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:13.174275    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:13.177507    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:13.177937    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:13.177945    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:13.177950    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:13.177958    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:13.179998    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:13.674842    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:13.674865    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:13.674920    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:13.674927    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:13.677347    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:13.677743    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:13.677750    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:13.677756    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:13.677760    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:13.679598    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:14.174511    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:14.174531    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:14.174543    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:14.174548    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:14.177242    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:14.177787    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:14.177794    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:14.177799    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:14.177804    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:14.179505    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:14.179846    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:14.674978    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:14.674991    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:14.675000    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:14.675005    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:14.677126    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:14.677577    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:14.677584    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:14.677589    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:14.677592    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:14.679150    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.174111    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:15.174190    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.174206    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.174214    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.178180    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:15.178702    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:15.178709    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.178716    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.178720    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.180563    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.674161    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:15.674175    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.674181    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.674184    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.676320    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:15.676809    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:15.676817    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.676822    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.676826    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.678731    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.679179    3848 pod_ready.go:93] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:15.679188    3848 pod_ready.go:82] duration metric: took 23.505390371s for pod "kube-controller-manager-ha-138000" in "kube-system" namespace to be "Ready" ...
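
	[editor's note] The loop above polls roughly every 500 ms, alternating a GET on the pod and a GET on its node, until the pod's Ready condition flips to True. The following is an illustrative client-go sketch of that pattern, not minikube's actual pod_ready implementation; the kubeconfig path and the 500 ms interval are assumptions inferred from the timestamps in the log.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical kubeconfig path; substitute your own.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx,
			"kube-controller-manager-ha-138000", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("Ready")
			return
		}
		// Matches the ~.174/.674 half-second cadence visible above.
		time.Sleep(500 * time.Millisecond)
	}
}
```
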
	I0815 16:31:15.679194    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.679234    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000-m02
	I0815 16:31:15.679239    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.679244    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.679249    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.680973    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.681373    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:15.681379    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.681385    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.681389    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.683105    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.683478    3848 pod_ready.go:93] pod "kube-controller-manager-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:15.683487    3848 pod_ready.go:82] duration metric: took 4.286435ms for pod "kube-controller-manager-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.683493    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.683528    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000-m03
	I0815 16:31:15.683532    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.683538    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.683543    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.685040    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.685461    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:15.685469    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.685474    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.685478    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.687218    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.687628    3848 pod_ready.go:93] pod "kube-controller-manager-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:15.687636    3848 pod_ready.go:82] duration metric: took 4.137303ms for pod "kube-controller-manager-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.687642    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cznkn" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.687674    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cznkn
	I0815 16:31:15.687679    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.687685    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.687690    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.689397    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.689764    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:15.689771    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.689776    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.689787    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.691449    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.691750    3848 pod_ready.go:93] pod "kube-proxy-cznkn" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:15.691759    3848 pod_ready.go:82] duration metric: took 4.111581ms for pod "kube-proxy-cznkn" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.691765    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kxghx" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.691804    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kxghx
	I0815 16:31:15.691809    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.691815    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.691819    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.693452    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.693908    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:15.693915    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.693921    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.693924    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.695674    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.695946    3848 pod_ready.go:93] pod "kube-proxy-kxghx" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:15.695955    3848 pod_ready.go:82] duration metric: took 4.185821ms for pod "kube-proxy-kxghx" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.695961    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qpth7" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.875071    3848 request.go:632] Waited for 179.069493ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qpth7
	I0815 16:31:15.875187    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qpth7
	I0815 16:31:15.875199    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.875210    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.875216    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.877997    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:16.074238    3848 request.go:632] Waited for 195.764515ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m04
	I0815 16:31:16.074336    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m04
	I0815 16:31:16.074348    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:16.074360    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:16.074366    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:16.076828    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:16.077164    3848 pod_ready.go:93] pod "kube-proxy-qpth7" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:16.077173    3848 pod_ready.go:82] duration metric: took 381.20933ms for pod "kube-proxy-qpth7" in "kube-system" namespace to be "Ready" ...
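
	[editor's note] The "Waited ... due to client-side throttling, not priority and fairness" lines are emitted by client-go's request.go when its built-in token-bucket rate limiter delays a request; by default client-go allows 5 requests/second with a burst of 10, which the burst of per-pod GETs here exceeds. A minimal sketch of raising those limits on a rest.Config follows; the numbers and kubeconfig path are illustrative assumptions, not minikube's settings.

```go
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; substitute your own.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // client-go default is 5 requests/second
	cfg.Burst = 100 // client-go default burst is 10
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_ = cs // use the clientset as usual; bursts of GETs no longer queue client-side
}
```
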
	I0815 16:31:16.077180    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tf79g" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:16.275150    3848 request.go:632] Waited for 197.922377ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tf79g
	I0815 16:31:16.275315    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tf79g
	I0815 16:31:16.275333    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:16.275348    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:16.275355    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:16.279230    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:16.474637    3848 request.go:632] Waited for 194.734989ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:16.474686    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:16.474694    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:16.474748    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:16.474760    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:16.477402    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:16.477913    3848 pod_ready.go:93] pod "kube-proxy-tf79g" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:16.477922    3848 pod_ready.go:82] duration metric: took 400.738709ms for pod "kube-proxy-tf79g" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:16.477928    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:16.674642    3848 request.go:632] Waited for 196.671207ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000
	I0815 16:31:16.674730    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000
	I0815 16:31:16.674740    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:16.674751    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:16.674791    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:16.677902    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:16.874216    3848 request.go:632] Waited for 195.903155ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:16.874296    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:16.874307    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:16.874318    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:16.874325    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:16.877076    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:16.877354    3848 pod_ready.go:93] pod "kube-scheduler-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:16.877362    3848 pod_ready.go:82] duration metric: took 399.431009ms for pod "kube-scheduler-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:16.877369    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:17.075600    3848 request.go:632] Waited for 198.191772ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m02
	I0815 16:31:17.075685    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m02
	I0815 16:31:17.075692    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:17.075697    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:17.075701    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:17.077601    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:17.275453    3848 request.go:632] Waited for 196.87369ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:17.275508    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:17.275516    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:17.275528    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:17.275536    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:17.278217    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:17.278748    3848 pod_ready.go:93] pod "kube-scheduler-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:17.278761    3848 pod_ready.go:82] duration metric: took 401.387065ms for pod "kube-scheduler-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:17.278778    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:17.474217    3848 request.go:632] Waited for 195.389302ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m03
	I0815 16:31:17.474330    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m03
	I0815 16:31:17.474342    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:17.474353    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:17.474361    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:17.477689    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:17.675623    3848 request.go:632] Waited for 197.469909ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:17.675688    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:17.675697    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:17.675705    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:17.675712    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:17.677994    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:17.678325    3848 pod_ready.go:93] pod "kube-scheduler-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:17.678335    3848 pod_ready.go:82] duration metric: took 399.551961ms for pod "kube-scheduler-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:17.678343    3848 pod_ready.go:39] duration metric: took 37.624501402s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 16:31:17.678361    3848 api_server.go:52] waiting for apiserver process to appear ...
	I0815 16:31:17.678422    3848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 16:31:17.692897    3848 api_server.go:72] duration metric: took 46.249064527s to wait for apiserver process to appear ...
	I0815 16:31:17.692911    3848 api_server.go:88] waiting for apiserver healthz status ...
	I0815 16:31:17.692928    3848 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0815 16:31:17.695957    3848 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
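
	[editor's note] The healthz probe above is a plain HTTPS GET against the apiserver; a 200 response with body "ok" is treated as healthy. Below is a sketch under the assumption that the endpoint answers without client credentials (some clusters restrict /healthz via RBAC, in which case the kubeconfig's certificates are needed); the address is the one from the log.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// InsecureSkipVerify is for illustration only; a real probe should
	// trust the cluster CA from the kubeconfig instead.
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.169.0.5:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // healthy apiserver: 200 ok
}
```
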
	I0815 16:31:17.695990    3848 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0815 16:31:17.695994    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:17.696000    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:17.696004    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:17.696581    3848 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0815 16:31:17.696664    3848 api_server.go:141] control plane version: v1.31.0
	I0815 16:31:17.696676    3848 api_server.go:131] duration metric: took 3.760735ms to wait for apiserver health ...
	I0815 16:31:17.696684    3848 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 16:31:17.874475    3848 request.go:632] Waited for 177.745811ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:31:17.874542    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:31:17.874551    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:17.874608    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:17.874617    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:17.879453    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:31:17.884757    3848 system_pods.go:59] 26 kube-system pods found
	I0815 16:31:17.884772    3848 system_pods.go:61] "coredns-6f6b679f8f-dmgt5" [47d73953-ec2c-4f17-b2b8-d6a9b5e5a316] Running
	I0815 16:31:17.884778    3848 system_pods.go:61] "coredns-6f6b679f8f-zc8jj" [b4a9df39-b09d-4bc3-97f6-b3176ff8e842] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 16:31:17.884783    3848 system_pods.go:61] "etcd-ha-138000" [c2e7e508-a157-4581-a446-2f48e51c0c16] Running
	I0815 16:31:17.884787    3848 system_pods.go:61] "etcd-ha-138000-m02" [4f371d3f-821b-4eae-ab52-5b75d90acd67] Running
	I0815 16:31:17.884791    3848 system_pods.go:61] "etcd-ha-138000-m03" [ebdd90b3-c3b2-44c2-a411-4f6afa67a455] Running
	I0815 16:31:17.884793    3848 system_pods.go:61] "kindnet-77dc6" [24bfc069-9446-43a7-aa61-308c1a62a20e] Running
	I0815 16:31:17.884796    3848 system_pods.go:61] "kindnet-dsvxt" [7645852b-8535-4cb4-8324-15c9b55176e3] Running
	I0815 16:31:17.884798    3848 system_pods.go:61] "kindnet-m887r" [ba31865b-c712-47a8-9fd8-06420270ac8b] Running
	I0815 16:31:17.884801    3848 system_pods.go:61] "kindnet-z6mnx" [fc6211a0-df99-4350-90ba-a18f74bd1bfc] Running
	I0815 16:31:17.884804    3848 system_pods.go:61] "kube-apiserver-ha-138000" [bbc684f7-d8e2-44f2-987a-df9d05cf54fc] Running
	I0815 16:31:17.884806    3848 system_pods.go:61] "kube-apiserver-ha-138000-m02" [c30e129b-f475-4131-a25e-7eeecb39cbea] Running
	I0815 16:31:17.884809    3848 system_pods.go:61] "kube-apiserver-ha-138000-m03" [90304f02-10f2-446d-b731-674df5602401] Running
	I0815 16:31:17.884811    3848 system_pods.go:61] "kube-controller-manager-ha-138000" [1d4148d1-9798-4662-91de-d9a7dae634e2] Running
	I0815 16:31:17.884814    3848 system_pods.go:61] "kube-controller-manager-ha-138000-m02" [ad693dae-cb0a-46ca-955b-33a9465d911a] Running
	I0815 16:31:17.884816    3848 system_pods.go:61] "kube-controller-manager-ha-138000-m03" [11aa509a-dade-4b66-b157-4d4cccb7f349] Running
	I0815 16:31:17.884819    3848 system_pods.go:61] "kube-proxy-cznkn" [61cd1a9a-80fb-4d0e-a4dd-f5ed2d6ddc0f] Running
	I0815 16:31:17.884821    3848 system_pods.go:61] "kube-proxy-kxghx" [27f61dcc-2812-4038-af2a-aa9f46033869] Running
	I0815 16:31:17.884823    3848 system_pods.go:61] "kube-proxy-qpth7" [a343f80b-0fe9-4c88-9782-5fbf9a6170d1] Running
	I0815 16:31:17.884826    3848 system_pods.go:61] "kube-proxy-tf79g" [ef8a2aeb-55c4-436e-a306-da8b93c9707f] Running
	I0815 16:31:17.884830    3848 system_pods.go:61] "kube-scheduler-ha-138000" [ee1d3eb0-596d-4bf0-bed4-dbd38b286e38] Running
	I0815 16:31:17.884832    3848 system_pods.go:61] "kube-scheduler-ha-138000-m02" [1cc29309-49ad-427e-862a-758cc5711eef] Running
	I0815 16:31:17.884835    3848 system_pods.go:61] "kube-scheduler-ha-138000-m03" [2e3efb3c-6903-4306-adec-d4a47b3a56bd] Running
	I0815 16:31:17.884837    3848 system_pods.go:61] "kube-vip-ha-138000" [1b8c66e5-ffaa-4d4b-a220-8eb664b79d91] Running
	I0815 16:31:17.884839    3848 system_pods.go:61] "kube-vip-ha-138000-m02" [a0fe2ba6-1ace-456f-ab2e-15c43303dcad] Running
	I0815 16:31:17.884841    3848 system_pods.go:61] "kube-vip-ha-138000-m03" [44fe2bd5-bd48-4f86-b38a-7e4b3554174c] Running
	I0815 16:31:17.884844    3848 system_pods.go:61] "storage-provisioner" [35f3a9d7-cb68-4b10-82a2-1bd72d8d1aa6] Running
	I0815 16:31:17.884847    3848 system_pods.go:74] duration metric: took 188.159351ms to wait for pod list to return data ...
	I0815 16:31:17.884852    3848 default_sa.go:34] waiting for default service account to be created ...
	I0815 16:31:18.074641    3848 request.go:632] Waited for 189.738485ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0815 16:31:18.074728    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0815 16:31:18.074738    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:18.074749    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:18.074756    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:18.078635    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:18.078759    3848 default_sa.go:45] found service account: "default"
	I0815 16:31:18.078768    3848 default_sa.go:55] duration metric: took 193.912663ms for default service account to be created ...
	I0815 16:31:18.078774    3848 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 16:31:18.274230    3848 request.go:632] Waited for 195.413402ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:31:18.274340    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:31:18.274351    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:18.274361    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:18.274369    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:18.279297    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:31:18.284504    3848 system_pods.go:86] 26 kube-system pods found
	I0815 16:31:18.284515    3848 system_pods.go:89] "coredns-6f6b679f8f-dmgt5" [47d73953-ec2c-4f17-b2b8-d6a9b5e5a316] Running
	I0815 16:31:18.284521    3848 system_pods.go:89] "coredns-6f6b679f8f-zc8jj" [b4a9df39-b09d-4bc3-97f6-b3176ff8e842] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 16:31:18.284525    3848 system_pods.go:89] "etcd-ha-138000" [c2e7e508-a157-4581-a446-2f48e51c0c16] Running
	I0815 16:31:18.284530    3848 system_pods.go:89] "etcd-ha-138000-m02" [4f371d3f-821b-4eae-ab52-5b75d90acd67] Running
	I0815 16:31:18.284534    3848 system_pods.go:89] "etcd-ha-138000-m03" [ebdd90b3-c3b2-44c2-a411-4f6afa67a455] Running
	I0815 16:31:18.284537    3848 system_pods.go:89] "kindnet-77dc6" [24bfc069-9446-43a7-aa61-308c1a62a20e] Running
	I0815 16:31:18.284540    3848 system_pods.go:89] "kindnet-dsvxt" [7645852b-8535-4cb4-8324-15c9b55176e3] Running
	I0815 16:31:18.284543    3848 system_pods.go:89] "kindnet-m887r" [ba31865b-c712-47a8-9fd8-06420270ac8b] Running
	I0815 16:31:18.284545    3848 system_pods.go:89] "kindnet-z6mnx" [fc6211a0-df99-4350-90ba-a18f74bd1bfc] Running
	I0815 16:31:18.284550    3848 system_pods.go:89] "kube-apiserver-ha-138000" [bbc684f7-d8e2-44f2-987a-df9d05cf54fc] Running
	I0815 16:31:18.284554    3848 system_pods.go:89] "kube-apiserver-ha-138000-m02" [c30e129b-f475-4131-a25e-7eeecb39cbea] Running
	I0815 16:31:18.284557    3848 system_pods.go:89] "kube-apiserver-ha-138000-m03" [90304f02-10f2-446d-b731-674df5602401] Running
	I0815 16:31:18.284561    3848 system_pods.go:89] "kube-controller-manager-ha-138000" [1d4148d1-9798-4662-91de-d9a7dae634e2] Running
	I0815 16:31:18.284564    3848 system_pods.go:89] "kube-controller-manager-ha-138000-m02" [ad693dae-cb0a-46ca-955b-33a9465d911a] Running
	I0815 16:31:18.284567    3848 system_pods.go:89] "kube-controller-manager-ha-138000-m03" [11aa509a-dade-4b66-b157-4d4cccb7f349] Running
	I0815 16:31:18.284570    3848 system_pods.go:89] "kube-proxy-cznkn" [61cd1a9a-80fb-4d0e-a4dd-f5ed2d6ddc0f] Running
	I0815 16:31:18.284572    3848 system_pods.go:89] "kube-proxy-kxghx" [27f61dcc-2812-4038-af2a-aa9f46033869] Running
	I0815 16:31:18.284575    3848 system_pods.go:89] "kube-proxy-qpth7" [a343f80b-0fe9-4c88-9782-5fbf9a6170d1] Running
	I0815 16:31:18.284579    3848 system_pods.go:89] "kube-proxy-tf79g" [ef8a2aeb-55c4-436e-a306-da8b93c9707f] Running
	I0815 16:31:18.284582    3848 system_pods.go:89] "kube-scheduler-ha-138000" [ee1d3eb0-596d-4bf0-bed4-dbd38b286e38] Running
	I0815 16:31:18.284586    3848 system_pods.go:89] "kube-scheduler-ha-138000-m02" [1cc29309-49ad-427e-862a-758cc5711eef] Running
	I0815 16:31:18.284588    3848 system_pods.go:89] "kube-scheduler-ha-138000-m03" [2e3efb3c-6903-4306-adec-d4a47b3a56bd] Running
	I0815 16:31:18.284591    3848 system_pods.go:89] "kube-vip-ha-138000" [1b8c66e5-ffaa-4d4b-a220-8eb664b79d91] Running
	I0815 16:31:18.284594    3848 system_pods.go:89] "kube-vip-ha-138000-m02" [a0fe2ba6-1ace-456f-ab2e-15c43303dcad] Running
	I0815 16:31:18.284596    3848 system_pods.go:89] "kube-vip-ha-138000-m03" [44fe2bd5-bd48-4f86-b38a-7e4b3554174c] Running
	I0815 16:31:18.284599    3848 system_pods.go:89] "storage-provisioner" [35f3a9d7-cb68-4b10-82a2-1bd72d8d1aa6] Running
	I0815 16:31:18.284603    3848 system_pods.go:126] duration metric: took 205.826361ms to wait for k8s-apps to be running ...
	I0815 16:31:18.284609    3848 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 16:31:18.284679    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 16:31:18.296708    3848 system_svc.go:56] duration metric: took 12.095446ms WaitForService to wait for kubelet
	I0815 16:31:18.296724    3848 kubeadm.go:582] duration metric: took 46.852894704s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 16:31:18.296736    3848 node_conditions.go:102] verifying NodePressure condition ...
	I0815 16:31:18.474267    3848 request.go:632] Waited for 177.483283ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0815 16:31:18.474322    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0815 16:31:18.474330    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:18.474371    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:18.474392    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:18.477388    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:18.478383    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:31:18.478396    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:31:18.478405    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:31:18.478408    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:31:18.478412    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:31:18.478415    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:31:18.478418    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:31:18.478423    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:31:18.478427    3848 node_conditions.go:105] duration metric: took 181.688465ms to run NodePressure ...
	I0815 16:31:18.478434    3848 start.go:241] waiting for startup goroutines ...
	I0815 16:31:18.478453    3848 start.go:255] writing updated cluster config ...
	I0815 16:31:18.501967    3848 out.go:201] 
	I0815 16:31:18.522062    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:31:18.522177    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:31:18.560022    3848 out.go:177] * Starting "ha-138000-m03" control-plane node in "ha-138000" cluster
	I0815 16:31:18.618077    3848 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:31:18.618104    3848 cache.go:56] Caching tarball of preloaded images
	I0815 16:31:18.618293    3848 preload.go:172] Found /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0815 16:31:18.618310    3848 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:31:18.618409    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:31:18.619051    3848 start.go:360] acquireMachinesLock for ha-138000-m03: {Name:mkac8034c8a60617271f2754a03dcaf74fc4fef7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:31:18.619147    3848 start.go:364] duration metric: took 77.203µs to acquireMachinesLock for "ha-138000-m03"
	I0815 16:31:18.619166    3848 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:31:18.619174    3848 fix.go:54] fixHost starting: m03
	I0815 16:31:18.619485    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:31:18.619510    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:31:18.628416    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52280
	I0815 16:31:18.628739    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:31:18.629076    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:31:18.629087    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:31:18.629285    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:31:18.629412    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:31:18.629506    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetState
	I0815 16:31:18.629587    3848 main.go:141] libmachine: (ha-138000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:31:18.629688    3848 main.go:141] libmachine: (ha-138000-m03) DBG | hyperkit pid from json: 3119
	I0815 16:31:18.630594    3848 main.go:141] libmachine: (ha-138000-m03) DBG | hyperkit pid 3119 missing from process table
	I0815 16:31:18.630635    3848 fix.go:112] recreateIfNeeded on ha-138000-m03: state=Stopped err=<nil>
	I0815 16:31:18.630646    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	W0815 16:31:18.630738    3848 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:31:18.653953    3848 out.go:177] * Restarting existing hyperkit VM for "ha-138000-m03" ...
	I0815 16:31:18.711722    3848 main.go:141] libmachine: (ha-138000-m03) Calling .Start
	I0815 16:31:18.712041    3848 main.go:141] libmachine: (ha-138000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:31:18.712160    3848 main.go:141] libmachine: (ha-138000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/hyperkit.pid
	I0815 16:31:18.713734    3848 main.go:141] libmachine: (ha-138000-m03) DBG | hyperkit pid 3119 missing from process table
	I0815 16:31:18.713751    3848 main.go:141] libmachine: (ha-138000-m03) DBG | pid 3119 is in state "Stopped"
	I0815 16:31:18.713774    3848 main.go:141] libmachine: (ha-138000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/hyperkit.pid...
	I0815 16:31:18.713958    3848 main.go:141] libmachine: (ha-138000-m03) DBG | Using UUID 4228381e-4618-4b8b-ac7c-129bf380703a
	I0815 16:31:18.742338    3848 main.go:141] libmachine: (ha-138000-m03) DBG | Generated MAC 9e:18:89:2a:2d:99
	I0815 16:31:18.742370    3848 main.go:141] libmachine: (ha-138000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000
	I0815 16:31:18.742565    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"4228381e-4618-4b8b-ac7c-129bf380703a", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037f470)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:31:18.742609    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"4228381e-4618-4b8b-ac7c-129bf380703a", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037f470)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:31:18.742699    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "4228381e-4618-4b8b-ac7c-129bf380703a", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/ha-138000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"}
	I0815 16:31:18.742751    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 4228381e-4618-4b8b-ac7c-129bf380703a -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/ha-138000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"
	I0815 16:31:18.742790    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0815 16:31:18.744551    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 DEBUG: hyperkit: Pid is 4186
	I0815 16:31:18.745071    3848 main.go:141] libmachine: (ha-138000-m03) DBG | Attempt 0
	I0815 16:31:18.745087    3848 main.go:141] libmachine: (ha-138000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:31:18.745163    3848 main.go:141] libmachine: (ha-138000-m03) DBG | hyperkit pid from json: 4186
	I0815 16:31:18.746856    3848 main.go:141] libmachine: (ha-138000-m03) DBG | Searching for 9e:18:89:2a:2d:99 in /var/db/dhcpd_leases ...
	I0815 16:31:18.746937    3848 main.go:141] libmachine: (ha-138000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0815 16:31:18.746955    3848 main.go:141] libmachine: (ha-138000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:31:18.746980    3848 main.go:141] libmachine: (ha-138000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:31:18.746991    3848 main.go:141] libmachine: (ha-138000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be8e15}
	I0815 16:31:18.747032    3848 main.go:141] libmachine: (ha-138000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfdedc}
	I0815 16:31:18.747039    3848 main.go:141] libmachine: (ha-138000-m03) DBG | Found match: 9e:18:89:2a:2d:99
	I0815 16:31:18.747040    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetConfigRaw
	I0815 16:31:18.747045    3848 main.go:141] libmachine: (ha-138000-m03) DBG | IP: 192.169.0.7
	I0815 16:31:18.747774    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetIP
	I0815 16:31:18.747963    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:31:18.748524    3848 machine.go:93] provisionDockerMachine start ...
	I0815 16:31:18.748538    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:31:18.748670    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:18.748765    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:18.748845    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:18.748950    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:18.749050    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:18.749179    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:31:18.749325    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0815 16:31:18.749333    3848 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 16:31:18.752657    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0815 16:31:18.760833    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0815 16:31:18.761721    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:31:18.761738    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:31:18.761746    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:31:18.761755    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:31:19.145894    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0815 16:31:19.145910    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0815 16:31:19.260828    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:31:19.260843    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:31:19.260851    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:31:19.260862    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:31:19.261711    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0815 16:31:19.261721    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0815 16:31:24.888063    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:24 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0815 16:31:24.888137    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:24 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0815 16:31:24.888149    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:24 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0815 16:31:24.911372    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:24 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0815 16:31:29.819902    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 16:31:29.819917    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetMachineName
	I0815 16:31:29.820052    3848 buildroot.go:166] provisioning hostname "ha-138000-m03"
	I0815 16:31:29.820067    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetMachineName
	I0815 16:31:29.820174    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:29.820268    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:29.820353    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:29.820429    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:29.820504    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:29.820626    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:31:29.820777    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0815 16:31:29.820785    3848 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-138000-m03 && echo "ha-138000-m03" | sudo tee /etc/hostname
	I0815 16:31:29.898224    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-138000-m03
	
	I0815 16:31:29.898247    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:29.898395    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:29.898481    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:29.898567    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:29.898654    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:29.898789    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:31:29.898974    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0815 16:31:29.898986    3848 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-138000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-138000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-138000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 16:31:29.968919    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 16:31:29.968938    3848 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19452-977/.minikube CaCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19452-977/.minikube}
	I0815 16:31:29.968947    3848 buildroot.go:174] setting up certificates
	I0815 16:31:29.968952    3848 provision.go:84] configureAuth start
	I0815 16:31:29.968959    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetMachineName
	I0815 16:31:29.969088    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetIP
	I0815 16:31:29.969172    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:29.969251    3848 provision.go:143] copyHostCerts
	I0815 16:31:29.969278    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:31:29.969343    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem, removing ...
	I0815 16:31:29.969348    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:31:29.969482    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem (1082 bytes)
	I0815 16:31:29.969678    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:31:29.969716    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem, removing ...
	I0815 16:31:29.969721    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:31:29.969830    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem (1123 bytes)
	I0815 16:31:29.969984    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:31:29.970023    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem, removing ...
	I0815 16:31:29.970028    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:31:29.970129    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem (1679 bytes)
	I0815 16:31:29.970281    3848 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem org=jenkins.ha-138000-m03 san=[127.0.0.1 192.169.0.7 ha-138000-m03 localhost minikube]
	I0815 16:31:30.063220    3848 provision.go:177] copyRemoteCerts
	I0815 16:31:30.063270    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 16:31:30.063286    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:30.063426    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:30.063510    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:30.063603    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:30.063685    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/id_rsa Username:docker}
	I0815 16:31:30.101783    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 16:31:30.101861    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 16:31:30.121792    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 16:31:30.121868    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 16:31:30.141970    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 16:31:30.142077    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 16:31:30.161960    3848 provision.go:87] duration metric: took 192.993235ms to configureAuth
	I0815 16:31:30.161983    3848 buildroot.go:189] setting minikube options for container-runtime
	I0815 16:31:30.162167    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:31:30.162199    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:31:30.162337    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:30.162430    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:30.162521    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:30.162598    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:30.162675    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:30.162784    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:31:30.162913    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0815 16:31:30.162921    3848 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0815 16:31:30.228685    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0815 16:31:30.228697    3848 buildroot.go:70] root file system type: tmpfs
	I0815 16:31:30.228781    3848 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0815 16:31:30.228793    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:30.228929    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:30.229020    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:30.229108    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:30.229195    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:30.229313    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:31:30.229444    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0815 16:31:30.229494    3848 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0815 16:31:30.305200    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0815 16:31:30.305217    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:30.305352    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:30.305448    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:30.305543    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:30.305648    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:30.305802    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:31:30.305948    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0815 16:31:30.305961    3848 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0815 16:31:31.969522    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0815 16:31:31.969536    3848 machine.go:96] duration metric: took 13.221047415s to provisionDockerMachine
	I0815 16:31:31.969548    3848 start.go:293] postStartSetup for "ha-138000-m03" (driver="hyperkit")
	I0815 16:31:31.969555    3848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 16:31:31.969566    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:31:31.969757    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 16:31:31.969772    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:31.969871    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:31.969976    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:31.970054    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:31.970139    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/id_rsa Username:docker}
	I0815 16:31:32.013928    3848 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 16:31:32.017159    3848 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 16:31:32.017170    3848 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/addons for local assets ...
	I0815 16:31:32.017274    3848 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/files for local assets ...
	I0815 16:31:32.017462    3848 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> 14982.pem in /etc/ssl/certs
	I0815 16:31:32.017468    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /etc/ssl/certs/14982.pem
	I0815 16:31:32.017677    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 16:31:32.029028    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:31:32.059130    3848 start.go:296] duration metric: took 89.573356ms for postStartSetup
	I0815 16:31:32.059162    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:31:32.059341    3848 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0815 16:31:32.059355    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:32.059449    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:32.059534    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:32.059624    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:32.059708    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/id_rsa Username:docker}
	I0815 16:31:32.098694    3848 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0815 16:31:32.098758    3848 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0815 16:31:32.152993    3848 fix.go:56] duration metric: took 13.533862474s for fixHost
	I0815 16:31:32.153017    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:32.153168    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:32.153266    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:32.153360    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:32.153453    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:32.153579    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:31:32.153719    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0815 16:31:32.153727    3848 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 16:31:32.220010    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723764692.474550074
	
	I0815 16:31:32.220026    3848 fix.go:216] guest clock: 1723764692.474550074
	I0815 16:31:32.220031    3848 fix.go:229] Guest: 2024-08-15 16:31:32.474550074 -0700 PDT Remote: 2024-08-15 16:31:32.153007 -0700 PDT m=+98.155027601 (delta=321.543074ms)
	I0815 16:31:32.220043    3848 fix.go:200] guest clock delta is within tolerance: 321.543074ms
	I0815 16:31:32.220047    3848 start.go:83] releasing machines lock for "ha-138000-m03", held for 13.600937599s
	I0815 16:31:32.220063    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:31:32.220193    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetIP
	I0815 16:31:32.242484    3848 out.go:177] * Found network options:
	I0815 16:31:32.262540    3848 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0815 16:31:32.284750    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 16:31:32.284780    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 16:31:32.284808    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:31:32.285357    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:31:32.285486    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:31:32.285580    3848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 16:31:32.285610    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	W0815 16:31:32.285635    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 16:31:32.285649    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 16:31:32.285725    3848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 16:31:32.285743    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:32.285746    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:32.285912    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:32.285930    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:32.286051    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:32.286078    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:32.286176    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:32.286220    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/id_rsa Username:docker}
	I0815 16:31:32.286297    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/id_rsa Username:docker}
	W0815 16:31:32.322271    3848 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 16:31:32.322331    3848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 16:31:32.369504    3848 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 16:31:32.369521    3848 start.go:495] detecting cgroup driver to use...
	I0815 16:31:32.369607    3848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:31:32.385397    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0815 16:31:32.393793    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0815 16:31:32.401893    3848 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0815 16:31:32.401954    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0815 16:31:32.410021    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:31:32.418144    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0815 16:31:32.426371    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:31:32.434583    3848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 16:31:32.442902    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0815 16:31:32.451254    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0815 16:31:32.459565    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0815 16:31:32.467863    3848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 16:31:32.475226    3848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 16:31:32.482724    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:31:32.583602    3848 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0815 16:31:32.603710    3848 start.go:495] detecting cgroup driver to use...
	I0815 16:31:32.603796    3848 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0815 16:31:32.620091    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:31:32.633248    3848 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 16:31:32.652532    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:31:32.666138    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:31:32.676424    3848 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0815 16:31:32.697061    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:31:32.707503    3848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:31:32.722896    3848 ssh_runner.go:195] Run: which cri-dockerd
	I0815 16:31:32.725902    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0815 16:31:32.733526    3848 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0815 16:31:32.747908    3848 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0815 16:31:32.853084    3848 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0815 16:31:32.953384    3848 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0815 16:31:32.953408    3848 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0815 16:31:32.968013    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:31:33.073760    3848 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0815 16:31:35.380632    3848 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.306859581s)
	I0815 16:31:35.380695    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0815 16:31:35.391776    3848 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0815 16:31:35.404750    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:31:35.414823    3848 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0815 16:31:35.508250    3848 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0815 16:31:35.605930    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:31:35.720643    3848 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0815 16:31:35.734388    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:31:35.745523    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:31:35.849768    3848 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0815 16:31:35.916223    3848 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0815 16:31:35.916311    3848 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0815 16:31:35.920652    3848 start.go:563] Will wait 60s for crictl version
	I0815 16:31:35.920712    3848 ssh_runner.go:195] Run: which crictl
	I0815 16:31:35.923687    3848 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 16:31:35.951143    3848 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0815 16:31:35.951216    3848 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:31:35.970702    3848 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:31:36.011114    3848 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0815 16:31:36.053083    3848 out.go:177]   - env NO_PROXY=192.169.0.5
	I0815 16:31:36.074064    3848 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0815 16:31:36.094992    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetIP
	I0815 16:31:36.095254    3848 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0815 16:31:36.098563    3848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 16:31:36.107924    3848 mustload.go:65] Loading cluster: ha-138000
	I0815 16:31:36.108121    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:31:36.108349    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:31:36.108371    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:31:36.117631    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52302
	I0815 16:31:36.118004    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:31:36.118362    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:31:36.118373    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:31:36.118572    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:31:36.118683    3848 main.go:141] libmachine: (ha-138000) Calling .GetState
	I0815 16:31:36.118769    3848 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:31:36.118858    3848 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid from json: 3862
	I0815 16:31:36.119807    3848 host.go:66] Checking if "ha-138000" exists ...
	I0815 16:31:36.120056    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:31:36.120079    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:31:36.128888    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52304
	I0815 16:31:36.129245    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:31:36.129613    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:31:36.129628    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:31:36.129838    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:31:36.129960    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:31:36.130061    3848 certs.go:68] Setting up /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000 for IP: 192.169.0.7
	I0815 16:31:36.130067    3848 certs.go:194] generating shared ca certs ...
	I0815 16:31:36.130076    3848 certs.go:226] acquiring lock for ca certs: {Name:mka1c88019f6064d2983ac988db71a67aeb65696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:31:36.130237    3848 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key
	I0815 16:31:36.130321    3848 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key
	I0815 16:31:36.130330    3848 certs.go:256] generating profile certs ...
	I0815 16:31:36.130443    3848 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.key
	I0815 16:31:36.130530    3848 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key.c7e1c29f
	I0815 16:31:36.130604    3848 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key
	I0815 16:31:36.130617    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 16:31:36.130638    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 16:31:36.130658    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 16:31:36.130676    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 16:31:36.130694    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 16:31:36.130735    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 16:31:36.130766    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 16:31:36.130785    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 16:31:36.130871    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem (1338 bytes)
	W0815 16:31:36.130920    3848 certs.go:480] ignoring /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498_empty.pem, impossibly tiny 0 bytes
	I0815 16:31:36.130928    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 16:31:36.130977    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem (1082 bytes)
	I0815 16:31:36.131019    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem (1123 bytes)
	I0815 16:31:36.131050    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem (1679 bytes)
	I0815 16:31:36.131116    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:31:36.131153    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem -> /usr/share/ca-certificates/1498.pem
	I0815 16:31:36.131174    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /usr/share/ca-certificates/14982.pem
	I0815 16:31:36.131191    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:31:36.131214    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:31:36.131305    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:31:36.131384    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:31:36.131503    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:31:36.131582    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
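
sshutil.go:53 assembles the client from the fields logged above: IP 192.169.0.5, port 22, user docker, and the per-machine id_rsa. A sketch of an equivalent connection using golang.org/x/crypto/ssh (key path and address from this run; skipping host-key verification is an assumption that matches throwaway test VMs, not a recommendation):

	// sshclient.go: dial the node the way the logged sshutil client does.
	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // ephemeral test VM
		}
		client, err := ssh.Dial("tcp", "192.169.0.5:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		fmt.Printf("connected, server version %s\n", client.ServerVersion())
	}
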
	I0815 16:31:36.163135    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0815 16:31:36.167195    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0815 16:31:36.177598    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0815 16:31:36.181380    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0815 16:31:36.190596    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0815 16:31:36.194001    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0815 16:31:36.202689    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0815 16:31:36.205906    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0815 16:31:36.214386    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0815 16:31:36.217472    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0815 16:31:36.226235    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0815 16:31:36.229561    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0815 16:31:36.238534    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 16:31:36.259009    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 16:31:36.279081    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 16:31:36.299147    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 16:31:36.319142    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 16:31:36.339480    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 16:31:36.359157    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 16:31:36.379445    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 16:31:36.399731    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem --> /usr/share/ca-certificates/1498.pem (1338 bytes)
	I0815 16:31:36.419506    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /usr/share/ca-certificates/14982.pem (1708 bytes)
	I0815 16:31:36.439172    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 16:31:36.458742    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0815 16:31:36.472323    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0815 16:31:36.486349    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0815 16:31:36.500064    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0815 16:31:36.513680    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0815 16:31:36.527778    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0815 16:31:36.541967    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0815 16:31:36.555903    3848 ssh_runner.go:195] Run: openssl version
	I0815 16:31:36.560554    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14982.pem && ln -fs /usr/share/ca-certificates/14982.pem /etc/ssl/certs/14982.pem"
	I0815 16:31:36.569772    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14982.pem
	I0815 16:31:36.573086    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:14 /usr/share/ca-certificates/14982.pem
	I0815 16:31:36.573133    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14982.pem
	I0815 16:31:36.577434    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14982.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 16:31:36.585945    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 16:31:36.594481    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:31:36.598014    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:05 /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:31:36.598056    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:31:36.602322    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 16:31:36.611545    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1498.pem && ln -fs /usr/share/ca-certificates/1498.pem /etc/ssl/certs/1498.pem"
	I0815 16:31:36.620267    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1498.pem
	I0815 16:31:36.623763    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:14 /usr/share/ca-certificates/1498.pem
	I0815 16:31:36.623818    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1498.pem
	I0815 16:31:36.628404    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1498.pem /etc/ssl/certs/51391683.0"
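
Each CA ends up reachable under /etc/ssl/certs by its OpenSSL subject hash — b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the test certs — which is how OpenSSL locates a CA during verification. A sketch of that hash-and-symlink step, shelling out to openssl just as the log does:

	// cahash.go: compute the subject hash and create the <hash>.0 symlink.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		pem := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // b5213941 for minikubeCA in this run
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		_ = os.Remove(link) // the log's ln -fs replaces any stale link
		if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
			panic(err)
		}
		fmt.Println("linked", link)
	}
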
	I0815 16:31:36.637260    3848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 16:31:36.640760    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 16:31:36.645076    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 16:31:36.649285    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 16:31:36.653546    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 16:31:36.657801    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 16:31:36.662041    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
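
The six openssl x509 -checkend 86400 runs above exit non-zero if a certificate expires within the next 24 hours, which is what would trigger regeneration. The same check in pure Go with crypto/x509 (a sketch; the path is one of the certs from this run):

	// checkend.go: fail if the cert expires within 86400 seconds.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/etcd/server.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block in file")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 86400s")
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least another day")
	}
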
	I0815 16:31:36.666218    3848 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.31.0 docker true true} ...
	I0815 16:31:36.666285    3848 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-138000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 16:31:36.666303    3848 kube-vip.go:115] generating kube-vip config ...
	I0815 16:31:36.666340    3848 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 16:31:36.678617    3848 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 16:31:36.678664    3848 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0815 16:31:36.678722    3848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 16:31:36.686802    3848 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 16:31:36.686869    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0815 16:31:36.694600    3848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0815 16:31:36.708358    3848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 16:31:36.721865    3848 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0815 16:31:36.736604    3848 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0815 16:31:36.739496    3848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 16:31:36.748868    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:31:36.847387    3848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:31:36.862652    3848 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 16:31:36.862839    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:31:36.884247    3848 out.go:177] * Verifying Kubernetes components...
	I0815 16:31:36.904597    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:31:37.032729    3848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:31:37.044674    3848 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:31:37.044869    3848 kapi.go:59] client config for ha-138000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.key", CAFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x3a6cf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0815 16:31:37.044913    3848 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0815 16:31:37.045078    3848 node_ready.go:35] waiting up to 6m0s for node "ha-138000-m03" to be "Ready" ...
	I0815 16:31:37.045127    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:37.045132    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.045138    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.045142    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.047558    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.545663    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:37.545719    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.545727    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.545756    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.548346    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.548775    3848 node_ready.go:49] node "ha-138000-m03" has status "Ready":"True"
	I0815 16:31:37.548786    3848 node_ready.go:38] duration metric: took 503.701087ms for node "ha-138000-m03" to be "Ready" ...
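
node_ready.go got a Ready answer on its second poll; the loop behind it asks GET /api/v1/nodes/<name> on a roughly 500ms cadence until the Ready condition holds or the 6m budget runs out. A schematic of that wait (isNodeReady is a hypothetical stand-in for the client-go condition check, not minikube's function):

	// nodewait.go: poll until ready, with a deadline.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func waitNodeReady(name string, timeout time.Duration, isNodeReady func(string) (bool, error)) error {
		deadline := time.Now().Add(timeout)
		for {
			ready, err := isNodeReady(name)
			if err != nil {
				return err
			}
			if ready {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for node " + name)
			}
			time.Sleep(500 * time.Millisecond) // the ~500ms cadence visible in the log
		}
	}

	func main() {
		start := time.Now()
		err := waitNodeReady("ha-138000-m03", 6*time.Minute, func(string) (bool, error) {
			return time.Since(start) > 400*time.Millisecond, nil // stub: Ready on the second poll
		})
		fmt.Println("wait result:", err)
	}
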
	I0815 16:31:37.548799    3848 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 16:31:37.548839    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:31:37.548848    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.548854    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.548859    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.555174    3848 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 16:31:37.561193    3848 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dmgt5" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:37.561251    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-dmgt5
	I0815 16:31:37.561256    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.561262    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.561267    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.563487    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.564065    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:37.564072    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.564078    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.564081    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.566147    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.566458    3848 pod_ready.go:93] pod "coredns-6f6b679f8f-dmgt5" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:37.566468    3848 pod_ready.go:82] duration metric: took 5.259716ms for pod "coredns-6f6b679f8f-dmgt5" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:37.566475    3848 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-zc8jj" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:37.566514    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-zc8jj
	I0815 16:31:37.566519    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.566525    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.566529    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.568717    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.569347    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:37.569355    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.569361    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.569365    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.571508    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.571903    3848 pod_ready.go:93] pod "coredns-6f6b679f8f-zc8jj" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:37.571913    3848 pod_ready.go:82] duration metric: took 5.431792ms for pod "coredns-6f6b679f8f-zc8jj" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:37.571919    3848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:37.571962    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000
	I0815 16:31:37.571967    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.571973    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.571976    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.574222    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.574650    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:37.574659    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.574665    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.574669    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.576917    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.577415    3848 pod_ready.go:93] pod "etcd-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:37.577426    3848 pod_ready.go:82] duration metric: took 5.501032ms for pod "etcd-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:37.577433    3848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:37.577470    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m02
	I0815 16:31:37.577478    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.577485    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.577489    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.579610    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.580030    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:37.580038    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.580044    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.580049    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.582713    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.583250    3848 pod_ready.go:93] pod "etcd-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:37.583261    3848 pod_ready.go:82] duration metric: took 5.823471ms for pod "etcd-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:37.583269    3848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:37.745749    3848 request.go:632] Waited for 162.439343ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:37.745806    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:37.745816    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.745824    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.745836    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.748134    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
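
The "Waited ... due to client-side throttling" lines come from client-go's token-bucket limiter: the rest.Config dump above shows QPS:0 and Burst:0, so the client falls back to its defaults of 5 requests per second with a burst of 10. An illustration of that behavior using golang.org/x/time/rate (an equivalent token bucket, not client-go's own limiter type):

	// throttle.go: a burst of 10 passes immediately, then ~200ms between requests.
	package main

	import (
		"context"
		"fmt"
		"time"

		"golang.org/x/time/rate"
	)

	func main() {
		limiter := rate.NewLimiter(rate.Limit(5), 10) // client-go's defaults when QPS/Burst are 0
		ctx := context.Background()
		for i := 0; i < 15; i++ {
			start := time.Now()
			if err := limiter.Wait(ctx); err != nil {
				panic(err)
			}
			if wait := time.Since(start); wait > time.Millisecond {
				fmt.Printf("request %d waited %v due to client-side throttling\n", i, wait)
			}
		}
	}
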
	I0815 16:31:37.945907    3848 request.go:632] Waited for 197.272516ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:37.945950    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:37.945956    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.945962    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.945966    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.948855    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:38.146195    3848 request.go:632] Waited for 62.814852ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:38.146243    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:38.146249    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:38.146296    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:38.146301    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:38.149137    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:38.346943    3848 request.go:632] Waited for 197.306674ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:38.346985    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:38.346994    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:38.347003    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:38.347010    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:38.349878    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:38.583459    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:38.583505    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:38.583514    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:38.583520    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:38.590031    3848 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 16:31:38.745745    3848 request.go:632] Waited for 155.336663ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:38.745818    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:38.745825    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:38.745831    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:38.745836    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:38.748530    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:39.083990    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:39.084003    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:39.084009    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:39.084013    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:39.086519    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:39.146468    3848 request.go:632] Waited for 59.248658ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:39.146510    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:39.146515    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:39.146521    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:39.146525    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:39.148504    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:39.583999    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:39.584017    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:39.584026    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:39.584029    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:39.589510    3848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 16:31:39.590427    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:39.590438    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:39.590445    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:39.590449    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:39.592655    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:39.593056    3848 pod_ready.go:103] pod "etcd-ha-138000-m03" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:40.084185    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:40.084202    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:40.084209    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:40.084214    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:40.086419    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:40.087158    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:40.087166    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:40.087172    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:40.087196    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:40.088975    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:40.584037    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:40.584051    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:40.584058    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:40.584061    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:40.586450    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:40.586944    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:40.586952    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:40.586958    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:40.586963    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:40.589014    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:41.083405    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:41.083421    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:41.083427    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:41.083433    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:41.086228    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:41.086971    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:41.086978    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:41.086985    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:41.086990    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:41.097843    3848 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0815 16:31:41.583963    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:41.583987    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:41.583999    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:41.584008    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:41.587268    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:41.588066    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:41.588074    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:41.588079    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:41.588083    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:41.589716    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:42.083443    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:42.083462    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:42.083471    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:42.083482    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:42.085751    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:42.086179    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:42.086187    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:42.086194    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:42.086197    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:42.087825    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:42.088133    3848 pod_ready.go:103] pod "etcd-ha-138000-m03" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:42.584042    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:42.584070    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:42.584081    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:42.584089    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:42.587530    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:42.588287    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:42.588295    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:42.588301    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:42.588305    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:42.589868    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:43.085149    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:43.085164    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:43.085170    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:43.085174    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:43.087319    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:43.087818    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:43.087825    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:43.087831    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:43.087834    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:43.089562    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:43.583720    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:43.583737    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:43.583744    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:43.583747    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:43.586238    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:43.586831    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:43.586842    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:43.586849    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:43.586852    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:43.589092    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:44.084178    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:44.084189    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:44.084195    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:44.084198    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:44.086364    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:44.086790    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:44.086798    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:44.086805    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:44.086809    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:44.088812    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:44.089107    3848 pod_ready.go:103] pod "etcd-ha-138000-m03" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:44.584718    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:44.584743    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:44.584755    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:44.584763    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:44.587851    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:44.588606    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:44.588615    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:44.588621    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:44.588624    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:44.590403    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:45.083471    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:45.083486    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:45.083492    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:45.083496    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:45.085722    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:45.086170    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:45.086177    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:45.086186    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:45.086189    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:45.087992    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:45.583684    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:45.583761    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:45.583775    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:45.583782    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:45.586696    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:45.587281    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:45.587292    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:45.587300    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:45.587305    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:45.588851    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:46.083567    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:46.083581    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:46.083590    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:46.083595    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:46.086254    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:46.086706    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:46.086714    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:46.086720    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:46.086724    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:46.088505    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:46.583431    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:46.583454    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:46.583474    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:46.583477    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:46.586641    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:46.587367    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:46.587376    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:46.587383    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:46.587389    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:46.590271    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:46.590924    3848 pod_ready.go:103] pod "etcd-ha-138000-m03" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:47.085070    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:47.085088    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:47.085094    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:47.085097    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:47.087411    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:47.087834    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:47.087841    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:47.087847    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:47.087856    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:47.089857    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:47.583460    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:47.583510    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:47.583537    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:47.583547    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:47.586412    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:47.587147    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:47.587155    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:47.587161    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:47.587164    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:47.589077    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:48.084130    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:48.084172    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:48.084180    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:48.084184    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:48.086241    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:48.086700    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:48.086708    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:48.086715    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:48.086719    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:48.088392    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:48.583712    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:48.583726    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:48.583733    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:48.583736    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:48.585950    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:48.586404    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:48.586411    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:48.586417    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:48.586420    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:48.588064    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:49.084795    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:49.084810    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:49.084817    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:49.084821    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:49.087201    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:49.087638    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:49.087646    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:49.087651    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:49.087655    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:49.089294    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:49.089762    3848 pod_ready.go:103] pod "etcd-ha-138000-m03" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:49.584532    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:49.584586    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:49.584596    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:49.584602    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:49.586828    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:49.587368    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:49.587376    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:49.587381    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:49.587386    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:49.589092    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:50.084677    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:50.084702    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:50.084714    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:50.084720    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:50.090233    3848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 16:31:50.091082    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:50.091090    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:50.091095    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:50.091098    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:50.093397    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:50.584557    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:50.584594    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:50.584607    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:50.584614    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:50.587331    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:50.588105    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:50.588113    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:50.588119    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:50.588122    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:50.589783    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:51.084222    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:51.084238    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:51.084245    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:51.084249    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:51.086498    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:51.086853    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:51.086860    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:51.086866    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:51.086869    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:51.088548    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:51.583648    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:51.583662    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:51.583669    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:51.583673    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:51.585837    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:51.586356    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:51.586364    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:51.586370    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:51.586374    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:51.588027    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:51.588324    3848 pod_ready.go:103] pod "etcd-ha-138000-m03" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:52.083439    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:52.083464    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.083477    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.083486    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.086839    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:52.087326    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:52.087334    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.087340    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.087344    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.089021    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:52.089421    3848 pod_ready.go:93] pod "etcd-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:52.089431    3848 pod_ready.go:82] duration metric: took 14.506206257s for pod "etcd-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
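
For context: each wait in this section is a ~500ms poll loop in which pod_ready alternates a GET on the pod with a GET on its node until the pod reports Ready (14.5s here for etcd-ha-138000-m03). A minimal client-go sketch of such a poll, with the kubeconfig path as an assumption, could look like:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; the log above uses the Jenkins test profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 500ms (matching the ~.5s spacing of the GETs above) until the
	// pod's Ready condition is True or the 6m budget from the log expires.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-ha-138000-m03", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("ready:", err == nil)
}

This sketches the observable behavior only, not minikube's actual pod_ready implementation.
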
	I0815 16:31:52.089443    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:52.089476    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000
	I0815 16:31:52.089481    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.089487    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.089490    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.091044    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:52.091506    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:52.091513    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.091519    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.091522    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.093067    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:52.093523    3848 pod_ready.go:93] pod "kube-apiserver-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:52.093534    3848 pod_ready.go:82] duration metric: took 4.083615ms for pod "kube-apiserver-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:52.093540    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:52.093569    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:31:52.093574    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.093579    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.093583    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.096079    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:52.096682    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:52.096689    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.096695    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.096698    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.098629    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:52.099014    3848 pod_ready.go:93] pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:52.099023    3848 pod_ready.go:82] duration metric: took 5.477344ms for pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:52.099030    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:52.099060    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m03
	I0815 16:31:52.099065    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.099071    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.099075    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.100773    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:52.101171    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:52.101178    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.101184    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.101188    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.108504    3848 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0815 16:31:52.599355    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m03
	I0815 16:31:52.599371    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.599378    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.599380    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.603474    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:31:52.603827    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:52.603834    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.603839    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.603842    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.607400    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:53.100426    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m03
	I0815 16:31:53.100452    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:53.100465    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:53.100469    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:53.103591    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:53.103977    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:53.103985    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:53.103991    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:53.103995    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:53.105550    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:53.600030    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m03
	I0815 16:31:53.600056    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:53.600098    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:53.600106    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:53.603820    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:53.604279    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:53.604287    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:53.604292    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:53.604302    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:53.605948    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:54.100215    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m03
	I0815 16:31:54.100240    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.100248    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.100254    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.103639    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:54.104211    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:54.104222    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.104230    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.104236    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.106285    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:54.106596    3848 pod_ready.go:103] pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:54.600238    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m03
	I0815 16:31:54.600262    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.600275    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.600280    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.603528    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:54.604248    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:54.604259    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.604268    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.604276    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.606261    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:54.606605    3848 pod_ready.go:93] pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:54.606614    3848 pod_ready.go:82] duration metric: took 2.507587207s for pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:54.606621    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:54.606652    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:54.606657    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.606663    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.606677    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.608196    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:54.608645    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:54.608652    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.608658    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.608661    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.610174    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:54.610543    3848 pod_ready.go:93] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:54.610551    3848 pod_ready.go:82] duration metric: took 3.924647ms for pod "kube-controller-manager-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:54.610565    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:54.610597    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000-m02
	I0815 16:31:54.610601    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.610607    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.610611    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.612220    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:54.612637    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:54.612644    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.612648    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.612652    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.614115    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:54.614453    3848 pod_ready.go:93] pod "kube-controller-manager-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:54.614461    3848 pod_ready.go:82] duration metric: took 3.890604ms for pod "kube-controller-manager-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:54.614467    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:54.685393    3848 request.go:632] Waited for 70.886034ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000-m03
	I0815 16:31:54.685542    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000-m03
	I0815 16:31:54.685554    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.685565    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.685572    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.689462    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:54.884047    3848 request.go:632] Waited for 194.079873ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:54.884179    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:54.884194    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.884206    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.884216    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.887378    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:54.887638    3848 pod_ready.go:93] pod "kube-controller-manager-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:54.887648    3848 pod_ready.go:82] duration metric: took 273.176916ms for pod "kube-controller-manager-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:54.887655    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cznkn" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:55.084696    3848 request.go:632] Waited for 197.006461ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cznkn
	I0815 16:31:55.084754    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cznkn
	I0815 16:31:55.084760    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:55.084766    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:55.084770    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:55.086486    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:55.284932    3848 request.go:632] Waited for 198.019424ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:55.285014    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:55.285023    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:55.285031    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:55.285034    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:55.287587    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:55.288003    3848 pod_ready.go:93] pod "kube-proxy-cznkn" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:55.288012    3848 pod_ready.go:82] duration metric: took 400.352996ms for pod "kube-proxy-cznkn" in "kube-system" namespace to be "Ready" ...
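
A note on the "Waited for ... due to client-side throttling, not priority and fairness" lines that begin here: they are emitted by client-go's own token-bucket limiter inside the client, not by API Priority and Fairness on the server. A sketch of that mechanism, assuming client-go's usual defaults of QPS=5 and Burst=10 (minikube may configure different values):

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// Token bucket: ~5 requests/second sustained, bursts of up to 10.
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)

	for i := 0; i < 15; i++ {
		start := time.Now()
		_ = limiter.Wait(context.Background()) // blocks once the burst is exhausted
		if d := time.Since(start); d > time.Millisecond {
			// Mirrors the ~70-200ms waits reported in the log above.
			fmt.Printf("request %d throttled for %v\n", i, d)
		}
	}
}
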
	I0815 16:31:55.288019    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kxghx" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:55.484813    3848 request.go:632] Waited for 196.749045ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kxghx
	I0815 16:31:55.484909    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kxghx
	I0815 16:31:55.484933    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:55.484946    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:55.484952    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:55.487936    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:55.684903    3848 request.go:632] Waited for 196.468256ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:55.684989    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:55.684999    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:55.685010    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:55.685019    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:55.688164    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:55.688606    3848 pod_ready.go:93] pod "kube-proxy-kxghx" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:55.688619    3848 pod_ready.go:82] duration metric: took 400.595564ms for pod "kube-proxy-kxghx" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:55.688628    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qpth7" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:55.884647    3848 request.go:632] Waited for 195.972571ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qpth7
	I0815 16:31:55.884703    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qpth7
	I0815 16:31:55.884734    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:55.884828    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:55.884842    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:55.887780    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:56.085059    3848 request.go:632] Waited for 196.76753ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m04
	I0815 16:31:56.085155    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m04
	I0815 16:31:56.085166    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:56.085178    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:56.085187    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:56.088438    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:56.088843    3848 pod_ready.go:98] node "ha-138000-m04" hosting pod "kube-proxy-qpth7" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-138000-m04" has status "Ready":"Unknown"
	I0815 16:31:56.088858    3848 pod_ready.go:82] duration metric: took 400.224535ms for pod "kube-proxy-qpth7" in "kube-system" namespace to be "Ready" ...
	E0815 16:31:56.088867    3848 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-138000-m04" hosting pod "kube-proxy-qpth7" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-138000-m04" has status "Ready":"Unknown"
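
The "skipping!" branch above fires because the pod's hosting node is not Ready; a Ready status of "Unknown" means the control plane has lost contact with that node's kubelet (ha-138000-m04 is still stopped at this point in the test). A compact sketch of that gate, under the same assumed kubeconfig bootstrap as earlier:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	node, err := cs.CoreV1().Nodes().Get(context.Background(), "ha-138000-m04", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
			fmt.Printf("node %s has Ready=%q; skip waiting on its pods\n", node.Name, c.Status)
		}
	}
}
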
	I0815 16:31:56.088873    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tf79g" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:56.284412    3848 request.go:632] Waited for 195.467169ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tf79g
	I0815 16:31:56.284533    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tf79g
	I0815 16:31:56.284544    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:56.284556    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:56.284567    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:56.287997    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:56.483641    3848 request.go:632] Waited for 195.132786ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:56.483717    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:56.483778    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:56.483801    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:56.483810    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:56.486922    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:56.487377    3848 pod_ready.go:93] pod "kube-proxy-tf79g" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:56.487387    3848 pod_ready.go:82] duration metric: took 398.50917ms for pod "kube-proxy-tf79g" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:56.487394    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:56.684509    3848 request.go:632] Waited for 197.075187ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000
	I0815 16:31:56.684584    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000
	I0815 16:31:56.684592    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:56.684600    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:56.684606    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:56.687177    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:56.884267    3848 request.go:632] Waited for 196.705982ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:56.884375    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:56.884384    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:56.884392    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:56.884396    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:56.886486    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:56.886846    3848 pod_ready.go:93] pod "kube-scheduler-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:56.886854    3848 pod_ready.go:82] duration metric: took 399.455831ms for pod "kube-scheduler-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:56.886860    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:57.083869    3848 request.go:632] Waited for 196.961301ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m02
	I0815 16:31:57.083950    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m02
	I0815 16:31:57.083960    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:57.083983    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:57.083992    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:57.087081    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:57.285517    3848 request.go:632] Waited for 197.962246ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:57.285639    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:57.285649    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:57.285659    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:57.285667    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:57.288947    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:57.289317    3848 pod_ready.go:93] pod "kube-scheduler-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:57.289331    3848 pod_ready.go:82] duration metric: took 402.465658ms for pod "kube-scheduler-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:57.289340    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:57.483919    3848 request.go:632] Waited for 194.531212ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m03
	I0815 16:31:57.484018    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m03
	I0815 16:31:57.484029    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:57.484041    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:57.484049    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:57.486736    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:57.683533    3848 request.go:632] Waited for 196.372817ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:57.683619    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:57.683630    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:57.683642    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:57.683649    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:57.686767    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:57.687131    3848 pod_ready.go:93] pod "kube-scheduler-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:57.687146    3848 pod_ready.go:82] duration metric: took 397.799248ms for pod "kube-scheduler-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:57.687155    3848 pod_ready.go:39] duration metric: took 20.138416099s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 16:31:57.687170    3848 api_server.go:52] waiting for apiserver process to appear ...
	I0815 16:31:57.687237    3848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 16:31:57.700597    3848 api_server.go:72] duration metric: took 20.837986375s to wait for apiserver process to appear ...
	I0815 16:31:57.700610    3848 api_server.go:88] waiting for apiserver healthz status ...
	I0815 16:31:57.700622    3848 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0815 16:31:57.703621    3848 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0815 16:31:57.703653    3848 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0815 16:31:57.703658    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:57.703664    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:57.703670    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:57.704168    3848 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0815 16:31:57.704198    3848 api_server.go:141] control plane version: v1.31.0
	I0815 16:31:57.704207    3848 api_server.go:131] duration metric: took 3.590796ms to wait for apiserver health ...
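
The healthz probe above is a plain HTTPS GET that succeeds as soon as the endpoint answers "ok". An equivalent standalone check (TLS verification is skipped purely for the sketch; the real client authenticates with the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
	}
	resp, err := client.Get("https://192.169.0.5:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}
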
	I0815 16:31:57.704213    3848 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 16:31:57.884532    3848 request.go:632] Waited for 180.27549ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:31:57.884634    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:31:57.884645    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:57.884656    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:57.884661    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:57.889257    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:31:57.894492    3848 system_pods.go:59] 26 kube-system pods found
	I0815 16:31:57.894504    3848 system_pods.go:61] "coredns-6f6b679f8f-dmgt5" [47d73953-ec2c-4f17-b2b8-d6a9b5e5a316] Running
	I0815 16:31:57.894508    3848 system_pods.go:61] "coredns-6f6b679f8f-zc8jj" [b4a9df39-b09d-4bc3-97f6-b3176ff8e842] Running
	I0815 16:31:57.894511    3848 system_pods.go:61] "etcd-ha-138000" [c2e7e508-a157-4581-a446-2f48e51c0c16] Running
	I0815 16:31:57.894514    3848 system_pods.go:61] "etcd-ha-138000-m02" [4f371d3f-821b-4eae-ab52-5b75d90acd67] Running
	I0815 16:31:57.894516    3848 system_pods.go:61] "etcd-ha-138000-m03" [ebdd90b3-c3b2-44c2-a411-4f6afa67a455] Running
	I0815 16:31:57.894519    3848 system_pods.go:61] "kindnet-77dc6" [24bfc069-9446-43a7-aa61-308c1a62a20e] Running
	I0815 16:31:57.894522    3848 system_pods.go:61] "kindnet-dsvxt" [7645852b-8535-4cb4-8324-15c9b55176e3] Running
	I0815 16:31:57.894525    3848 system_pods.go:61] "kindnet-m887r" [ba31865b-c712-47a8-9fd8-06420270ac8b] Running
	I0815 16:31:57.894527    3848 system_pods.go:61] "kindnet-z6mnx" [fc6211a0-df99-4350-90ba-a18f74bd1bfc] Running
	I0815 16:31:57.894530    3848 system_pods.go:61] "kube-apiserver-ha-138000" [bbc684f7-d8e2-44f2-987a-df9d05cf54fc] Running
	I0815 16:31:57.894534    3848 system_pods.go:61] "kube-apiserver-ha-138000-m02" [c30e129b-f475-4131-a25e-7eeecb39cbea] Running
	I0815 16:31:57.894537    3848 system_pods.go:61] "kube-apiserver-ha-138000-m03" [90304f02-10f2-446d-b731-674df5602401] Running
	I0815 16:31:57.894541    3848 system_pods.go:61] "kube-controller-manager-ha-138000" [1d4148d1-9798-4662-91de-d9a7dae634e2] Running
	I0815 16:31:57.894545    3848 system_pods.go:61] "kube-controller-manager-ha-138000-m02" [ad693dae-cb0a-46ca-955b-33a9465d911a] Running
	I0815 16:31:57.894547    3848 system_pods.go:61] "kube-controller-manager-ha-138000-m03" [11aa509a-dade-4b66-b157-4d4cccb7f349] Running
	I0815 16:31:57.894550    3848 system_pods.go:61] "kube-proxy-cznkn" [61cd1a9a-80fb-4d0e-a4dd-f5ed2d6ddc0f] Running
	I0815 16:31:57.894553    3848 system_pods.go:61] "kube-proxy-kxghx" [27f61dcc-2812-4038-af2a-aa9f46033869] Running
	I0815 16:31:57.894555    3848 system_pods.go:61] "kube-proxy-qpth7" [a343f80b-0fe9-4c88-9782-5fbf9a6170d1] Running
	I0815 16:31:57.894558    3848 system_pods.go:61] "kube-proxy-tf79g" [ef8a2aeb-55c4-436e-a306-da8b93c9707f] Running
	I0815 16:31:57.894560    3848 system_pods.go:61] "kube-scheduler-ha-138000" [ee1d3eb0-596d-4bf0-bed4-dbd38b286e38] Running
	I0815 16:31:57.894563    3848 system_pods.go:61] "kube-scheduler-ha-138000-m02" [1cc29309-49ad-427e-862a-758cc5711eef] Running
	I0815 16:31:57.894566    3848 system_pods.go:61] "kube-scheduler-ha-138000-m03" [2e3efb3c-6903-4306-adec-d4a47b3a56bd] Running
	I0815 16:31:57.894572    3848 system_pods.go:61] "kube-vip-ha-138000" [1b8c66e5-ffaa-4d4b-a220-8eb664b79d91] Running
	I0815 16:31:57.894575    3848 system_pods.go:61] "kube-vip-ha-138000-m02" [a0fe2ba6-1ace-456f-ab2e-15c43303dcad] Running
	I0815 16:31:57.894578    3848 system_pods.go:61] "kube-vip-ha-138000-m03" [44fe2bd5-bd48-4f86-b38a-7e4b3554174c] Running
	I0815 16:31:57.894581    3848 system_pods.go:61] "storage-provisioner" [35f3a9d7-cb68-4b10-82a2-1bd72d8d1aa6] Running
	I0815 16:31:57.894585    3848 system_pods.go:74] duration metric: took 190.369062ms to wait for pod list to return data ...
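
The 26-pod sweep above is a single List call on the kube-system namespace followed by a per-pod phase check. A minimal equivalent, with the kubeconfig path again an assumption:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			fmt.Printf("%q is %s, not Running\n", p.Name, p.Status.Phase)
		}
	}
}
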
	I0815 16:31:57.894590    3848 default_sa.go:34] waiting for default service account to be created ...
	I0815 16:31:58.083903    3848 request.go:632] Waited for 189.255195ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0815 16:31:58.083992    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0815 16:31:58.084004    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:58.084016    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:58.084024    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:58.087624    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:58.087687    3848 default_sa.go:45] found service account: "default"
	I0815 16:31:58.087696    3848 default_sa.go:55] duration metric: took 193.101509ms for default service account to be created ...
	I0815 16:31:58.087703    3848 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 16:31:58.284595    3848 request.go:632] Waited for 196.812141ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:31:58.284716    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:31:58.284728    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:58.284740    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:58.284748    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:58.290177    3848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 16:31:58.295724    3848 system_pods.go:86] 26 kube-system pods found
	I0815 16:31:58.295738    3848 system_pods.go:89] "coredns-6f6b679f8f-dmgt5" [47d73953-ec2c-4f17-b2b8-d6a9b5e5a316] Running
	I0815 16:31:58.295742    3848 system_pods.go:89] "coredns-6f6b679f8f-zc8jj" [b4a9df39-b09d-4bc3-97f6-b3176ff8e842] Running
	I0815 16:31:58.295747    3848 system_pods.go:89] "etcd-ha-138000" [c2e7e508-a157-4581-a446-2f48e51c0c16] Running
	I0815 16:31:58.295759    3848 system_pods.go:89] "etcd-ha-138000-m02" [4f371d3f-821b-4eae-ab52-5b75d90acd67] Running
	I0815 16:31:58.295765    3848 system_pods.go:89] "etcd-ha-138000-m03" [ebdd90b3-c3b2-44c2-a411-4f6afa67a455] Running
	I0815 16:31:58.295768    3848 system_pods.go:89] "kindnet-77dc6" [24bfc069-9446-43a7-aa61-308c1a62a20e] Running
	I0815 16:31:58.295779    3848 system_pods.go:89] "kindnet-dsvxt" [7645852b-8535-4cb4-8324-15c9b55176e3] Running
	I0815 16:31:58.295783    3848 system_pods.go:89] "kindnet-m887r" [ba31865b-c712-47a8-9fd8-06420270ac8b] Running
	I0815 16:31:58.295786    3848 system_pods.go:89] "kindnet-z6mnx" [fc6211a0-df99-4350-90ba-a18f74bd1bfc] Running
	I0815 16:31:58.295789    3848 system_pods.go:89] "kube-apiserver-ha-138000" [bbc684f7-d8e2-44f2-987a-df9d05cf54fc] Running
	I0815 16:31:58.295791    3848 system_pods.go:89] "kube-apiserver-ha-138000-m02" [c30e129b-f475-4131-a25e-7eeecb39cbea] Running
	I0815 16:31:58.295795    3848 system_pods.go:89] "kube-apiserver-ha-138000-m03" [90304f02-10f2-446d-b731-674df5602401] Running
	I0815 16:31:58.295798    3848 system_pods.go:89] "kube-controller-manager-ha-138000" [1d4148d1-9798-4662-91de-d9a7dae634e2] Running
	I0815 16:31:58.295801    3848 system_pods.go:89] "kube-controller-manager-ha-138000-m02" [ad693dae-cb0a-46ca-955b-33a9465d911a] Running
	I0815 16:31:58.295804    3848 system_pods.go:89] "kube-controller-manager-ha-138000-m03" [11aa509a-dade-4b66-b157-4d4cccb7f349] Running
	I0815 16:31:58.295807    3848 system_pods.go:89] "kube-proxy-cznkn" [61cd1a9a-80fb-4d0e-a4dd-f5ed2d6ddc0f] Running
	I0815 16:31:58.295814    3848 system_pods.go:89] "kube-proxy-kxghx" [27f61dcc-2812-4038-af2a-aa9f46033869] Running
	I0815 16:31:58.295818    3848 system_pods.go:89] "kube-proxy-qpth7" [a343f80b-0fe9-4c88-9782-5fbf9a6170d1] Running
	I0815 16:31:58.295821    3848 system_pods.go:89] "kube-proxy-tf79g" [ef8a2aeb-55c4-436e-a306-da8b93c9707f] Running
	I0815 16:31:58.295824    3848 system_pods.go:89] "kube-scheduler-ha-138000" [ee1d3eb0-596d-4bf0-bed4-dbd38b286e38] Running
	I0815 16:31:58.295827    3848 system_pods.go:89] "kube-scheduler-ha-138000-m02" [1cc29309-49ad-427e-862a-758cc5711eef] Running
	I0815 16:31:58.295830    3848 system_pods.go:89] "kube-scheduler-ha-138000-m03" [2e3efb3c-6903-4306-adec-d4a47b3a56bd] Running
	I0815 16:31:58.295833    3848 system_pods.go:89] "kube-vip-ha-138000" [1b8c66e5-ffaa-4d4b-a220-8eb664b79d91] Running
	I0815 16:31:58.295836    3848 system_pods.go:89] "kube-vip-ha-138000-m02" [a0fe2ba6-1ace-456f-ab2e-15c43303dcad] Running
	I0815 16:31:58.295838    3848 system_pods.go:89] "kube-vip-ha-138000-m03" [44fe2bd5-bd48-4f86-b38a-7e4b3554174c] Running
	I0815 16:31:58.295841    3848 system_pods.go:89] "storage-provisioner" [35f3a9d7-cb68-4b10-82a2-1bd72d8d1aa6] Running
	I0815 16:31:58.295845    3848 system_pods.go:126] duration metric: took 208.13908ms to wait for k8s-apps to be running ...
	I0815 16:31:58.295851    3848 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 16:31:58.295902    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 16:31:58.307696    3848 system_svc.go:56] duration metric: took 11.840404ms WaitForService to wait for kubelet
	I0815 16:31:58.307710    3848 kubeadm.go:582] duration metric: took 21.445104276s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 16:31:58.307721    3848 node_conditions.go:102] verifying NodePressure condition ...
	I0815 16:31:58.483467    3848 request.go:632] Waited for 175.699042ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0815 16:31:58.483523    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0815 16:31:58.483531    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:58.483546    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:58.483605    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:58.487271    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:58.488234    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:31:58.488246    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:31:58.488253    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:31:58.488256    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:31:58.488259    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:31:58.488263    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:31:58.488266    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:31:58.488269    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:31:58.488272    3848 node_conditions.go:105] duration metric: took 180.547852ms to run NodePressure ...
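
The NodePressure pass reads each node's reported capacity (four nodes here, each with 2 CPUs and ~17GiB of ephemeral storage). A sketch of that read, same assumed bootstrap:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		// The log above reports these as "node cpu capacity is 2" and
		// "node storage ephemeral capacity is 17734596Ki".
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}
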
	I0815 16:31:58.488280    3848 start.go:241] waiting for startup goroutines ...
	I0815 16:31:58.488303    3848 start.go:255] writing updated cluster config ...
	I0815 16:31:58.511626    3848 out.go:201] 
	I0815 16:31:58.532028    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:31:58.532166    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:31:58.553589    3848 out.go:177] * Starting "ha-138000-m04" worker node in "ha-138000" cluster
	I0815 16:31:58.594430    3848 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:31:58.594502    3848 cache.go:56] Caching tarball of preloaded images
	I0815 16:31:58.594676    3848 preload.go:172] Found /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0815 16:31:58.594694    3848 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:31:58.594833    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:31:58.595712    3848 start.go:360] acquireMachinesLock for ha-138000-m04: {Name:mkac8034c8a60617271f2754a03dcaf74fc4fef7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:31:58.595816    3848 start.go:364] duration metric: took 79.794µs to acquireMachinesLock for "ha-138000-m04"
	I0815 16:31:58.595841    3848 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:31:58.595851    3848 fix.go:54] fixHost starting: m04
	I0815 16:31:58.596274    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:31:58.596311    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:31:58.605762    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52311
	I0815 16:31:58.606137    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:31:58.606475    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:31:58.606484    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:31:58.606737    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:31:58.606878    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:31:58.606971    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetState
	I0815 16:31:58.607059    3848 main.go:141] libmachine: (ha-138000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:31:58.607149    3848 main.go:141] libmachine: (ha-138000-m04) DBG | hyperkit pid from json: 3240
	I0815 16:31:58.608054    3848 main.go:141] libmachine: (ha-138000-m04) DBG | hyperkit pid 3240 missing from process table
	I0815 16:31:58.608090    3848 fix.go:112] recreateIfNeeded on ha-138000-m04: state=Stopped err=<nil>
	I0815 16:31:58.608101    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	W0815 16:31:58.608193    3848 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:31:58.629670    3848 out.go:177] * Restarting existing hyperkit VM for "ha-138000-m04" ...
	I0815 16:31:58.671397    3848 main.go:141] libmachine: (ha-138000-m04) Calling .Start
	I0815 16:31:58.671607    3848 main.go:141] libmachine: (ha-138000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:31:58.671648    3848 main.go:141] libmachine: (ha-138000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/hyperkit.pid
	I0815 16:31:58.671760    3848 main.go:141] libmachine: (ha-138000-m04) DBG | Using UUID e49817f2-f6c4-46a0-a846-8a8b2da04ea9
	I0815 16:31:58.700620    3848 main.go:141] libmachine: (ha-138000-m04) DBG | Generated MAC 66:d1:6e:6f:24:26
	I0815 16:31:58.700645    3848 main.go:141] libmachine: (ha-138000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000
	I0815 16:31:58.700779    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e49817f2-f6c4-46a0-a846-8a8b2da04ea9", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002ad680)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:31:58.700809    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e49817f2-f6c4-46a0-a846-8a8b2da04ea9", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002ad680)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:31:58.700889    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "e49817f2-f6c4-46a0-a846-8a8b2da04ea9", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/ha-138000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"}
	I0815 16:31:58.700927    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U e49817f2-f6c4-46a0-a846-8a8b2da04ea9 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/ha-138000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"
	I0815 16:31:58.700973    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0815 16:31:58.702332    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 DEBUG: hyperkit: Pid is 4201
	I0815 16:31:58.702793    3848 main.go:141] libmachine: (ha-138000-m04) DBG | Attempt 0
	I0815 16:31:58.702829    3848 main.go:141] libmachine: (ha-138000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:31:58.702904    3848 main.go:141] libmachine: (ha-138000-m04) DBG | hyperkit pid from json: 4201
	I0815 16:31:58.703953    3848 main.go:141] libmachine: (ha-138000-m04) DBG | Searching for 66:d1:6e:6f:24:26 in /var/db/dhcpd_leases ...
	I0815 16:31:58.704027    3848 main.go:141] libmachine: (ha-138000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0815 16:31:58.704048    3848 main.go:141] libmachine: (ha-138000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:31:58.704066    3848 main.go:141] libmachine: (ha-138000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:31:58.704081    3848 main.go:141] libmachine: (ha-138000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:31:58.704095    3848 main.go:141] libmachine: (ha-138000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be8e15}
	I0815 16:31:58.704105    3848 main.go:141] libmachine: (ha-138000-m04) DBG | Found match: 66:d1:6e:6f:24:26
	I0815 16:31:58.704118    3848 main.go:141] libmachine: (ha-138000-m04) DBG | IP: 192.169.0.8
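
The MAC-to-IP lookup above scans macOS's /var/db/dhcpd_leases for the MAC hyperkit generated for the VM. The real logic lives in docker-machine-driver-hyperkit; the following is an illustrative stand-in that assumes the usual "ip_address=" / "hw_address=1,<mac>" entry layout of that file:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	const wantMAC = "66:d1:6e:6f:24:26" // the MAC generated in the log above

	f, err := os.Open("/var/db/dhcpd_leases")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "ip_address=") {
			ip = strings.TrimPrefix(line, "ip_address=") // remember this entry's IP
		}
		if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, wantMAC) {
			fmt.Println("IP:", ip) // expect 192.169.0.8 per the lease table above
			return
		}
	}
	fmt.Println("no lease found for", wantMAC)
}
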
	I0815 16:31:58.704138    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetConfigRaw
	I0815 16:31:58.704996    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetIP
	I0815 16:31:58.705244    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:31:58.705856    3848 machine.go:93] provisionDockerMachine start ...
	I0815 16:31:58.705869    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:31:58.705978    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:31:58.706098    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:31:58.706206    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:31:58.706333    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:31:58.706439    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:31:58.706614    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:31:58.706786    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0815 16:31:58.706796    3848 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 16:31:58.710462    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0815 16:31:58.720101    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0815 16:31:58.720991    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:31:58.721013    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:31:58.721022    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:31:58.721032    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:31:59.105309    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0815 16:31:59.105335    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0815 16:31:59.220059    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:31:59.220079    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:31:59.220089    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:31:59.220095    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:31:59.220911    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0815 16:31:59.220942    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0815 16:32:04.889008    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:32:04 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0815 16:32:04.889030    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:32:04 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0815 16:32:04.889049    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:32:04 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0815 16:32:04.912331    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:32:04 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0815 16:32:33.787060    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 16:32:33.787084    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetMachineName
	I0815 16:32:33.787215    3848 buildroot.go:166] provisioning hostname "ha-138000-m04"
	I0815 16:32:33.787226    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetMachineName
	I0815 16:32:33.787318    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:33.787397    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:33.787483    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:33.787564    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:33.787640    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:33.787765    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:32:33.787937    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0815 16:32:33.787945    3848 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-138000-m04 && echo "ha-138000-m04" | sudo tee /etc/hostname
	I0815 16:32:33.847992    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-138000-m04
	
	I0815 16:32:33.848008    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:33.848137    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:33.848240    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:33.848322    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:33.848426    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:33.848548    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:32:33.848705    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0815 16:32:33.848716    3848 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-138000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-138000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-138000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 16:32:33.904813    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 16:32:33.904838    3848 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19452-977/.minikube CaCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19452-977/.minikube}
	I0815 16:32:33.904848    3848 buildroot.go:174] setting up certificates
	I0815 16:32:33.904853    3848 provision.go:84] configureAuth start
	I0815 16:32:33.904860    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetMachineName
	I0815 16:32:33.904995    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetIP
	I0815 16:32:33.905084    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:33.905176    3848 provision.go:143] copyHostCerts
	I0815 16:32:33.905203    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:32:33.905264    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem, removing ...
	I0815 16:32:33.905280    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:32:33.915862    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem (1082 bytes)
	I0815 16:32:33.936338    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:32:33.936399    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem, removing ...
	I0815 16:32:33.936405    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:32:33.960707    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem (1123 bytes)
	I0815 16:32:33.961241    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:32:33.961296    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem, removing ...
	I0815 16:32:33.961303    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:32:33.961391    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem (1679 bytes)
	I0815 16:32:33.961771    3848 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem org=jenkins.ha-138000-m04 san=[127.0.0.1 192.169.0.8 ha-138000-m04 localhost minikube]
	I0815 16:32:34.048242    3848 provision.go:177] copyRemoteCerts
	I0815 16:32:34.048297    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 16:32:34.048312    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:34.048461    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:34.048558    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:34.048644    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:34.048725    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/id_rsa Username:docker}
	I0815 16:32:34.079744    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 16:32:34.079820    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 16:32:34.099832    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 16:32:34.099904    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 16:32:34.119955    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 16:32:34.120035    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 16:32:34.140743    3848 provision.go:87] duration metric: took 235.600662ms to configureAuth
	I0815 16:32:34.140757    3848 buildroot.go:189] setting minikube options for container-runtime
	I0815 16:32:34.140940    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:32:34.140975    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:32:34.141106    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:34.141218    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:34.141307    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:34.141393    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:34.141471    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:34.141580    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:32:34.141705    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0815 16:32:34.141713    3848 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0815 16:32:34.191590    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0815 16:32:34.191604    3848 buildroot.go:70] root file system type: tmpfs
	I0815 16:32:34.191676    3848 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0815 16:32:34.191686    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:34.191824    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:34.191939    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:34.192031    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:34.192133    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:34.192260    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:32:34.192405    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0815 16:32:34.192449    3848 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0815 16:32:34.253544    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	Environment=NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0815 16:32:34.253562    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:34.253696    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:34.253789    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:34.253863    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:34.253953    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:34.254084    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:32:34.254223    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0815 16:32:34.254235    3848 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0815 16:32:35.839568    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0815 16:32:35.839584    3848 machine.go:96] duration metric: took 37.11179722s to provisionDockerMachine
	I0815 16:32:35.839591    3848 start.go:293] postStartSetup for "ha-138000-m04" (driver="hyperkit")
	I0815 16:32:35.839597    3848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 16:32:35.839606    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:32:35.839797    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 16:32:35.839811    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:35.839906    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:35.839987    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:35.840069    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:35.840139    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/id_rsa Username:docker}
	I0815 16:32:35.872247    3848 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 16:32:35.875358    3848 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 16:32:35.875369    3848 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/addons for local assets ...
	I0815 16:32:35.875469    3848 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/files for local assets ...
	I0815 16:32:35.875649    3848 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> 14982.pem in /etc/ssl/certs
	I0815 16:32:35.875656    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /etc/ssl/certs/14982.pem
	I0815 16:32:35.875856    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 16:32:35.884005    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:32:35.903707    3848 start.go:296] duration metric: took 64.039683ms for postStartSetup
	I0815 16:32:35.903730    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:32:35.903903    3848 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0815 16:32:35.903917    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:35.904012    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:35.904095    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:35.904168    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:35.904243    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/id_rsa Username:docker}
	I0815 16:32:35.936201    3848 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0815 16:32:35.936261    3848 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0815 16:32:35.969821    3848 fix.go:56] duration metric: took 37.351909726s for fixHost
	I0815 16:32:35.969846    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:35.969981    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:35.970066    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:35.970160    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:35.970248    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:35.970357    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:32:35.970503    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0815 16:32:35.970511    3848 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 16:32:36.019594    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723764755.882644542
	
	I0815 16:32:36.019607    3848 fix.go:216] guest clock: 1723764755.882644542
	I0815 16:32:36.019612    3848 fix.go:229] Guest: 2024-08-15 16:32:35.882644542 -0700 PDT Remote: 2024-08-15 16:32:35.969836 -0700 PDT m=+161.949888378 (delta=-87.191458ms)
	I0815 16:32:36.019628    3848 fix.go:200] guest clock delta is within tolerance: -87.191458ms
	I0815 16:32:36.019633    3848 start.go:83] releasing machines lock for "ha-138000-m04", held for 37.401695552s
	I0815 16:32:36.019652    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:32:36.019780    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetIP
	I0815 16:32:36.042030    3848 out.go:177] * Found network options:
	I0815 16:32:36.062147    3848 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	W0815 16:32:36.083026    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 16:32:36.083070    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 16:32:36.083084    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 16:32:36.083102    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:32:36.083847    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:32:36.084058    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:32:36.084240    3848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 16:32:36.084283    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	W0815 16:32:36.084353    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 16:32:36.084375    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 16:32:36.084394    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 16:32:36.084487    3848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 16:32:36.084508    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:36.084519    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:36.084733    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:36.084745    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:36.084957    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:36.084992    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:36.085156    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:36.085189    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/id_rsa Username:docker}
	I0815 16:32:36.085315    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/id_rsa Username:docker}
	W0815 16:32:36.114740    3848 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 16:32:36.114803    3848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 16:32:36.163124    3848 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 16:32:36.163145    3848 start.go:495] detecting cgroup driver to use...
	I0815 16:32:36.163258    3848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:32:36.179534    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0815 16:32:36.187872    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0815 16:32:36.196474    3848 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0815 16:32:36.196528    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0815 16:32:36.204752    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:32:36.212948    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0815 16:32:36.221222    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:32:36.229511    3848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 16:32:36.238142    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0815 16:32:36.246643    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0815 16:32:36.254862    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0815 16:32:36.263281    3848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 16:32:36.270596    3848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 16:32:36.278325    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:32:36.377803    3848 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0815 16:32:36.396329    3848 start.go:495] detecting cgroup driver to use...
	I0815 16:32:36.396399    3848 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0815 16:32:36.411192    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:32:36.423875    3848 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 16:32:36.437859    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:32:36.449142    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:32:36.460191    3848 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0815 16:32:36.479331    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:32:36.491179    3848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:32:36.506341    3848 ssh_runner.go:195] Run: which cri-dockerd
	I0815 16:32:36.509156    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0815 16:32:36.517306    3848 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0815 16:32:36.530887    3848 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0815 16:32:36.631226    3848 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0815 16:32:36.742723    3848 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0815 16:32:36.742750    3848 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0815 16:32:36.756569    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:32:36.851332    3848 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0815 16:32:39.062024    3848 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.208594053s)
	I0815 16:32:39.062086    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0815 16:32:39.072858    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:32:39.083135    3848 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0815 16:32:39.180174    3848 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0815 16:32:39.296201    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:32:39.397264    3848 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0815 16:32:39.409768    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:32:39.419919    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:32:39.520172    3848 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0815 16:32:39.580712    3848 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0815 16:32:39.580787    3848 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0815 16:32:39.585172    3848 start.go:563] Will wait 60s for crictl version
	I0815 16:32:39.585233    3848 ssh_runner.go:195] Run: which crictl
	I0815 16:32:39.588436    3848 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 16:32:39.616400    3848 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0815 16:32:39.616480    3848 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:32:39.635416    3848 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:32:39.674509    3848 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0815 16:32:39.715170    3848 out.go:177]   - env NO_PROXY=192.169.0.5
	I0815 16:32:39.736207    3848 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0815 16:32:39.756990    3848 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	I0815 16:32:39.778125    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetIP
	I0815 16:32:39.778383    3848 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0815 16:32:39.781735    3848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 16:32:39.792335    3848 mustload.go:65] Loading cluster: ha-138000
	I0815 16:32:39.792518    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:32:39.792754    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:32:39.792777    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:32:39.801573    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52333
	I0815 16:32:39.801892    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:32:39.802227    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:32:39.802235    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:32:39.802431    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:32:39.802539    3848 main.go:141] libmachine: (ha-138000) Calling .GetState
	I0815 16:32:39.802617    3848 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:32:39.802698    3848 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid from json: 3862
	I0815 16:32:39.803669    3848 host.go:66] Checking if "ha-138000" exists ...
	I0815 16:32:39.803925    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:32:39.803948    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:32:39.812411    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52335
	I0815 16:32:39.812752    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:32:39.813108    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:32:39.813119    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:32:39.813352    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:32:39.813479    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:32:39.813578    3848 certs.go:68] Setting up /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000 for IP: 192.169.0.8
	I0815 16:32:39.813584    3848 certs.go:194] generating shared ca certs ...
	I0815 16:32:39.813595    3848 certs.go:226] acquiring lock for ca certs: {Name:mka1c88019f6064d2983ac988db71a67aeb65696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:32:39.813775    3848 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key
	I0815 16:32:39.813853    3848 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key
	I0815 16:32:39.813863    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 16:32:39.813888    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 16:32:39.813907    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 16:32:39.813924    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 16:32:39.814032    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem (1338 bytes)
	W0815 16:32:39.814088    3848 certs.go:480] ignoring /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498_empty.pem, impossibly tiny 0 bytes
	I0815 16:32:39.814098    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 16:32:39.814142    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem (1082 bytes)
	I0815 16:32:39.814184    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem (1123 bytes)
	I0815 16:32:39.814213    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem (1679 bytes)
	I0815 16:32:39.814289    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:32:39.814324    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:32:39.814344    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem -> /usr/share/ca-certificates/1498.pem
	I0815 16:32:39.814362    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /usr/share/ca-certificates/14982.pem
	I0815 16:32:39.814393    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 16:32:39.834330    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 16:32:39.854069    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 16:32:39.873582    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 16:32:39.893143    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 16:32:39.912645    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem --> /usr/share/ca-certificates/1498.pem (1338 bytes)
	I0815 16:32:39.932104    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /usr/share/ca-certificates/14982.pem (1708 bytes)
	I0815 16:32:39.951872    3848 ssh_runner.go:195] Run: openssl version
	I0815 16:32:39.956296    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 16:32:39.966055    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:32:39.970287    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:05 /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:32:39.970366    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:32:39.974984    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 16:32:39.984513    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1498.pem && ln -fs /usr/share/ca-certificates/1498.pem /etc/ssl/certs/1498.pem"
	I0815 16:32:39.994098    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1498.pem
	I0815 16:32:39.997571    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:14 /usr/share/ca-certificates/1498.pem
	I0815 16:32:39.997641    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1498.pem
	I0815 16:32:40.002092    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1498.pem /etc/ssl/certs/51391683.0"
	I0815 16:32:40.011802    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14982.pem && ln -fs /usr/share/ca-certificates/14982.pem /etc/ssl/certs/14982.pem"
	I0815 16:32:40.021159    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14982.pem
	I0815 16:32:40.024904    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:14 /usr/share/ca-certificates/14982.pem
	I0815 16:32:40.024948    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14982.pem
	I0815 16:32:40.029236    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14982.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 16:32:40.038952    3848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 16:32:40.042186    3848 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 16:32:40.042220    3848 kubeadm.go:934] updating node {m04 192.169.0.8 0 v1.31.0 docker false true} ...
	I0815 16:32:40.042279    3848 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-138000-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 16:32:40.042327    3848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 16:32:40.050823    3848 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 16:32:40.050877    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0815 16:32:40.059254    3848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0815 16:32:40.072800    3848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 16:32:40.086506    3848 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0815 16:32:40.089484    3848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 16:32:40.099835    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:32:40.204428    3848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:32:40.219160    3848 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0815 16:32:40.219362    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:32:40.240563    3848 out.go:177] * Verifying Kubernetes components...
	I0815 16:32:40.281239    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:32:40.407726    3848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:32:40.424517    3848 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:32:40.424746    3848 kapi.go:59] client config for ha-138000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.key", CAFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x3a6cf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0815 16:32:40.424790    3848 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0815 16:32:40.424946    3848 node_ready.go:35] waiting up to 6m0s for node "ha-138000-m04" to be "Ready" ...
	I0815 16:32:40.424985    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m04
	I0815 16:32:40.424990    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.424997    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.425001    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.429695    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:32:40.925699    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m04
	I0815 16:32:40.925718    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.925730    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.925735    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.928643    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:40.929158    3848 node_ready.go:49] node "ha-138000-m04" has status "Ready":"True"
	I0815 16:32:40.929170    3848 node_ready.go:38] duration metric: took 503.811986ms for node "ha-138000-m04" to be "Ready" ...
	I0815 16:32:40.929177    3848 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 16:32:40.929232    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:32:40.929240    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.929248    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.929253    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.932889    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:40.938534    3848 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dmgt5" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:40.938586    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-dmgt5
	I0815 16:32:40.938591    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.938597    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.938601    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.940630    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:40.941135    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:40.941143    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.941149    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.941155    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.943092    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:32:40.943437    3848 pod_ready.go:93] pod "coredns-6f6b679f8f-dmgt5" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:40.943446    3848 pod_ready.go:82] duration metric: took 4.897461ms for pod "coredns-6f6b679f8f-dmgt5" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:40.943453    3848 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-zc8jj" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:40.943484    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-zc8jj
	I0815 16:32:40.943489    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.943495    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.943498    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.945206    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:32:40.945690    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:40.945697    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.945703    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.945706    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.947257    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:32:40.947557    3848 pod_ready.go:93] pod "coredns-6f6b679f8f-zc8jj" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:40.947566    3848 pod_ready.go:82] duration metric: took 4.10464ms for pod "coredns-6f6b679f8f-zc8jj" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:40.947580    3848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:40.947611    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000
	I0815 16:32:40.947616    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.947622    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.947625    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.949227    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:32:40.949563    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:40.949570    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.949576    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.949579    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.951175    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:32:40.951528    3848 pod_ready.go:93] pod "etcd-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:40.951537    3848 pod_ready.go:82] duration metric: took 3.9487ms for pod "etcd-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:40.951543    3848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:40.951576    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m02
	I0815 16:32:40.951581    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.951587    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.951590    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.953480    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:32:40.953888    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:32:40.953896    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.953902    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.953906    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.956234    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:40.956704    3848 pod_ready.go:93] pod "etcd-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:40.956713    3848 pod_ready.go:82] duration metric: took 5.161406ms for pod "etcd-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:40.956719    3848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:41.126239    3848 request.go:632] Waited for 169.295221ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:32:41.126310    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:32:41.126326    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:41.126342    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:41.126348    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:41.129984    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:41.327227    3848 request.go:632] Waited for 196.482674ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:41.327282    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:41.327327    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:41.327340    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:41.327346    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:41.330300    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:41.330659    3848 pod_ready.go:93] pod "etcd-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:41.330669    3848 pod_ready.go:82] duration metric: took 373.660924ms for pod "etcd-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:41.330681    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:41.526448    3848 request.go:632] Waited for 195.583591ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000
	I0815 16:32:41.526543    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000
	I0815 16:32:41.526554    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:41.526567    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:41.526577    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:41.532016    3848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 16:32:41.726373    3848 request.go:632] Waited for 193.637616ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:41.726406    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:41.726411    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:41.726417    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:41.726421    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:41.728634    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:41.729100    3848 pod_ready.go:93] pod "kube-apiserver-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:41.729111    3848 pod_ready.go:82] duration metric: took 398.123683ms for pod "kube-apiserver-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:41.729118    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:41.926911    3848 request.go:632] Waited for 197.603818ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:32:41.927000    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:32:41.927007    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:41.927013    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:41.927017    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:41.929844    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:42.128208    3848 request.go:632] Waited for 197.600405ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:32:42.128281    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:32:42.128287    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:42.128294    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:42.128297    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:42.130511    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:42.130893    3848 pod_ready.go:93] pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:42.130903    3848 pod_ready.go:82] duration metric: took 401.488989ms for pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:42.130910    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:42.326992    3848 request.go:632] Waited for 195.89771ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m03
	I0815 16:32:42.327104    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m03
	I0815 16:32:42.327117    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:42.327128    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:42.327133    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:42.330012    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:42.528721    3848 request.go:632] Waited for 197.972621ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:42.528810    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:42.528823    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:42.528832    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:42.528839    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:42.531660    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:42.532014    3848 pod_ready.go:93] pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:42.532023    3848 pod_ready.go:82] duration metric: took 400.824225ms for pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:42.532031    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:42.728571    3848 request.go:632] Waited for 196.361424ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:32:42.728605    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:32:42.728614    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:42.728647    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:42.728651    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:42.731003    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:42.928382    3848 request.go:632] Waited for 196.815945ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:42.928456    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:42.928464    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:42.928472    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:42.928479    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:42.930971    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:42.931316    3848 pod_ready.go:93] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:42.931325    3848 pod_ready.go:82] duration metric: took 399.007322ms for pod "kube-controller-manager-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:42.931332    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:43.127763    3848 request.go:632] Waited for 196.250954ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000-m02
	I0815 16:32:43.127817    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000-m02
	I0815 16:32:43.127830    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:43.127894    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:43.127907    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:43.131065    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:43.327999    3848 request.go:632] Waited for 196.235394ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:32:43.328052    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:32:43.328063    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:43.328073    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:43.328081    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:43.331302    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:43.331997    3848 pod_ready.go:93] pod "kube-controller-manager-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:43.332007    3848 pod_ready.go:82] duration metric: took 400.403262ms for pod "kube-controller-manager-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:43.332014    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:43.527716    3848 request.go:632] Waited for 195.527377ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000-m03
	I0815 16:32:43.527817    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000-m03
	I0815 16:32:43.527829    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:43.527841    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:43.527847    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:43.530965    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:43.728236    3848 request.go:632] Waited for 196.484633ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:43.728298    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:43.728309    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:43.728320    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:43.728328    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:43.731883    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:43.732469    3848 pod_ready.go:93] pod "kube-controller-manager-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:43.732478    3848 pod_ready.go:82] duration metric: took 400.192656ms for pod "kube-controller-manager-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:43.732484    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cznkn" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:43.928265    3848 request.go:632] Waited for 195.61986ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cznkn
	I0815 16:32:43.928325    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cznkn
	I0815 16:32:43.928331    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:43.928337    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:43.928341    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:43.930546    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:44.128606    3848 request.go:632] Waited for 197.39717ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:44.128669    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:44.128682    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:44.128693    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:44.128702    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:44.132274    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:44.132835    3848 pod_ready.go:93] pod "kube-proxy-cznkn" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:44.132847    3848 pod_ready.go:82] duration metric: took 400.10235ms for pod "kube-proxy-cznkn" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:44.132856    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kxghx" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:44.328927    3848 request.go:632] Waited for 195.898781ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kxghx
	I0815 16:32:44.328980    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kxghx
	I0815 16:32:44.328988    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:44.328997    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:44.329003    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:44.332425    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:44.528721    3848 request.go:632] Waited for 195.542417ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:44.528856    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:44.528867    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:44.528878    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:44.528884    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:44.532391    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:44.532921    3848 pod_ready.go:93] pod "kube-proxy-kxghx" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:44.532933    3848 pod_ready.go:82] duration metric: took 399.821933ms for pod "kube-proxy-kxghx" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:44.532943    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qpth7" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:44.729675    3848 request.go:632] Waited for 196.549445ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qpth7
	I0815 16:32:44.729804    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qpth7
	I0815 16:32:44.729823    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:44.729835    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:44.729845    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:44.733406    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:44.929790    3848 request.go:632] Waited for 195.811353ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m04
	I0815 16:32:44.929844    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m04
	I0815 16:32:44.929899    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:44.929913    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:44.929919    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:44.933124    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:44.933608    3848 pod_ready.go:93] pod "kube-proxy-qpth7" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:44.933620    3848 pod_ready.go:82] duration metric: took 400.423483ms for pod "kube-proxy-qpth7" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:44.933628    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tf79g" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:45.129188    3848 request.go:632] Waited for 195.397689ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tf79g
	I0815 16:32:45.129249    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tf79g
	I0815 16:32:45.129265    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:45.129278    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:45.129288    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:45.132523    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:45.329740    3848 request.go:632] Waited for 196.543831ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:32:45.329842    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:32:45.329853    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:45.329864    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:45.329893    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:45.332959    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:45.333655    3848 pod_ready.go:93] pod "kube-proxy-tf79g" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:45.333668    3848 pod_ready.go:82] duration metric: took 399.799233ms for pod "kube-proxy-tf79g" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:45.333677    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:45.528959    3848 request.go:632] Waited for 195.085989ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000
	I0815 16:32:45.528999    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000
	I0815 16:32:45.529004    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:45.529011    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:45.529014    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:45.531204    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:45.730380    3848 request.go:632] Waited for 198.71096ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:45.730470    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:45.730488    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:45.730540    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:45.730549    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:45.733632    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:45.734206    3848 pod_ready.go:93] pod "kube-scheduler-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:45.734218    3848 pod_ready.go:82] duration metric: took 400.300105ms for pod "kube-scheduler-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:45.734227    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:45.929618    3848 request.go:632] Waited for 195.186999ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m02
	I0815 16:32:45.929667    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m02
	I0815 16:32:45.929676    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:45.929687    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:45.929695    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:45.933262    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:46.130161    3848 request.go:632] Waited for 196.149607ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:32:46.130227    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:32:46.130233    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:46.130239    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:46.130243    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:46.132556    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:46.132872    3848 pod_ready.go:93] pod "kube-scheduler-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:46.132882    3848 pod_ready.go:82] duration metric: took 398.424946ms for pod "kube-scheduler-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:46.132892    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:46.330062    3848 request.go:632] Waited for 196.982598ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m03
	I0815 16:32:46.330155    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m03
	I0815 16:32:46.330165    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:46.330189    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:46.330198    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:46.333748    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:46.529626    3848 request.go:632] Waited for 195.297916ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:46.529687    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:46.529698    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:46.529709    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:46.529716    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:46.532896    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:46.533425    3848 pod_ready.go:93] pod "kube-scheduler-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:46.533437    3848 pod_ready.go:82] duration metric: took 400.316472ms for pod "kube-scheduler-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:46.533445    3848 pod_ready.go:39] duration metric: took 5.600601602s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 16:32:46.533458    3848 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 16:32:46.533512    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 16:32:46.545338    3848 system_svc.go:56] duration metric: took 11.868784ms WaitForService to wait for kubelet
	I0815 16:32:46.545353    3848 kubeadm.go:582] duration metric: took 6.321930293s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 16:32:46.545367    3848 node_conditions.go:102] verifying NodePressure condition ...
	I0815 16:32:46.729678    3848 request.go:632] Waited for 184.161888ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0815 16:32:46.729775    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0815 16:32:46.729791    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:46.729803    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:46.729814    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:46.733356    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:46.734408    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:32:46.734417    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:32:46.734438    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:32:46.734446    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:32:46.734451    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:32:46.734454    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:32:46.734459    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:32:46.734463    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:32:46.734466    3848 node_conditions.go:105] duration metric: took 188.991963ms to run NodePressure ...
	I0815 16:32:46.734473    3848 start.go:241] waiting for startup goroutines ...
	I0815 16:32:46.734487    3848 start.go:255] writing updated cluster config ...
	I0815 16:32:46.734849    3848 ssh_runner.go:195] Run: rm -f paused
	I0815 16:32:46.777324    3848 start.go:600] kubectl: 1.29.2, cluster: 1.31.0 (minor skew: 2)
	I0815 16:32:46.799308    3848 out.go:201] 
	W0815 16:32:46.820067    3848 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0.
	I0815 16:32:46.840863    3848 out.go:177]   - Want kubectl v1.31.0? Try 'minikube kubectl -- get pods -A'
	I0815 16:32:46.862128    3848 out.go:177] * Done! kubectl is now configured to use "ha-138000" cluster and "default" namespace by default
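	
	Annotation: the repeated "Waited for ...ms due to client-side throttling, not priority and fairness" lines above are emitted by client-go's default client-side rate limiter, whose historical defaults (QPS=5, Burst=10) allow one request per ~200ms once the burst is spent -- which matches the ~195-198ms waits logged here. A minimal sketch, assuming a standard client-go setup (the QPS/Burst values are illustrative, not minikube's):
	
	    package main
	
	    import (
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )
	
	    // newClient builds a clientset with a relaxed client-side rate
	    // limiter, so back-to-back GETs like the pod_ready polling above
	    // would not sit in the limiter for ~200ms each.
	    func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
	        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	        if err != nil {
	            return nil, err
	        }
	        cfg.QPS = 50    // illustrative; client-go's default is 5
	        cfg.Burst = 100 // illustrative; client-go's default is 10
	        return kubernetes.NewForConfig(cfg)
	    }
	
	    func main() {
	        if _, err := newClient(clientcmd.RecommendedHomeFile); err != nil {
	            panic(err)
	        }
	    }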
	
	
	==> Docker <==
	Aug 15 23:30:58 ha-138000 dockerd[1153]: time="2024-08-15T23:30:58.911495531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:30:58 ha-138000 dockerd[1153]: time="2024-08-15T23:30:58.913627850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 15 23:30:58 ha-138000 dockerd[1153]: time="2024-08-15T23:30:58.913666039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 15 23:30:58 ha-138000 dockerd[1153]: time="2024-08-15T23:30:58.913677629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:30:58 ha-138000 dockerd[1153]: time="2024-08-15T23:30:58.913771765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:30:58 ha-138000 dockerd[1153]: time="2024-08-15T23:30:58.917066694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 15 23:30:58 ha-138000 dockerd[1153]: time="2024-08-15T23:30:58.917195390Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 15 23:30:58 ha-138000 dockerd[1153]: time="2024-08-15T23:30:58.917208298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:30:58 ha-138000 dockerd[1153]: time="2024-08-15T23:30:58.917385910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:30:59 ha-138000 dockerd[1153]: time="2024-08-15T23:30:59.886428053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 15 23:30:59 ha-138000 dockerd[1153]: time="2024-08-15T23:30:59.886532806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 15 23:30:59 ha-138000 dockerd[1153]: time="2024-08-15T23:30:59.886546833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:30:59 ha-138000 dockerd[1153]: time="2024-08-15T23:30:59.886748891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:31:00 ha-138000 dockerd[1153]: time="2024-08-15T23:31:00.892633352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 15 23:31:00 ha-138000 dockerd[1153]: time="2024-08-15T23:31:00.893116347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 15 23:31:00 ha-138000 dockerd[1153]: time="2024-08-15T23:31:00.893221469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:31:00 ha-138000 dockerd[1153]: time="2024-08-15T23:31:00.893411350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:31:01 ha-138000 dockerd[1153]: time="2024-08-15T23:31:01.876748430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 15 23:31:01 ha-138000 dockerd[1153]: time="2024-08-15T23:31:01.876814366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 15 23:31:01 ha-138000 dockerd[1153]: time="2024-08-15T23:31:01.876834716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:31:01 ha-138000 dockerd[1153]: time="2024-08-15T23:31:01.876961405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:31:03 ha-138000 dockerd[1153]: time="2024-08-15T23:31:03.874516614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 15 23:31:03 ha-138000 dockerd[1153]: time="2024-08-15T23:31:03.874614005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 15 23:31:03 ha-138000 dockerd[1153]: time="2024-08-15T23:31:03.874643416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:31:03 ha-138000 dockerd[1153]: time="2024-08-15T23:31:03.874757663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f4a0ec142726f       045733566833c                                                                                         About a minute ago   Running             kube-controller-manager   7                   787273cdcffa4       kube-controller-manager-ha-138000
	9b4d9e684266a       cbb01a7bd410d                                                                                         About a minute ago   Running             coredns                   1                   e616bc4c74358       coredns-6f6b679f8f-dmgt5
	80f5762ff7596       8c811b4aec35f                                                                                         About a minute ago   Running             busybox                   1                   67d12a31b7b49       busybox-7dff88458-wgww9
	fea7f52d9a276       6e38f40d628db                                                                                         About a minute ago   Running             storage-provisioner       1                   b65d03e28df57       storage-provisioner
	a06770ea62d50       cbb01a7bd410d                                                                                         About a minute ago   Running             coredns                   1                   730316cfbee9c       coredns-6f6b679f8f-zc8jj
	3102e608c7d69       ad83b2ca7b09e                                                                                         About a minute ago   Running             kube-proxy                1                   824e79b38bfeb       kube-proxy-cznkn
	d35ee43272703       12968670680f4                                                                                         About a minute ago   Running             kindnet-cni               1                   28b2ff94764c2       kindnet-77dc6
	67b207257b40d       2e96e5913fc06                                                                                         2 minutes ago        Running             etcd                      3                   5fbdeb5e7a6b9       etcd-ha-138000
	c2ddb52a9846f       1766f54c897f0                                                                                         2 minutes ago        Running             kube-scheduler            2                   d5e3465359549       kube-scheduler-ha-138000
	2d2c6da6f7b74       38af8ddebf499                                                                                         2 minutes ago        Running             kube-vip                  1                   2bb58ad8c8f10       kube-vip-ha-138000
	2ed9ae0427266       045733566833c                                                                                         2 minutes ago        Exited              kube-controller-manager   6                   787273cdcffa4       kube-controller-manager-ha-138000
	a6baf6e21d6c9       604f5db92eaa8                                                                                         2 minutes ago        Running             kube-apiserver            6                   0de6d71d60938       kube-apiserver-ha-138000
	5ed11c46e0eb7       604f5db92eaa8                                                                                         3 minutes ago        Exited              kube-apiserver            5                   7152268f8eec4       kube-apiserver-ha-138000
	59dac0b44544a       2e96e5913fc06                                                                                         3 minutes ago        Exited              etcd                      2                   ec285d4826baa       etcd-ha-138000
	efbc09be8eda5       38af8ddebf499                                                                                         7 minutes ago        Exited              kube-vip                  0                   0c665afd15e6f       kube-vip-ha-138000
	ac6935271595c       1766f54c897f0                                                                                         7 minutes ago        Exited              kube-scheduler            1                   07c1c62e41d3a       kube-scheduler-ha-138000
	8f20284cd3969       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   10 minutes ago       Exited              busybox                   0                   bfc975a528b9e       busybox-7dff88458-wgww9
	42f5d82b00417       cbb01a7bd410d                                                                                         12 minutes ago       Exited              coredns                   0                   10891f8fbffcc       coredns-6f6b679f8f-dmgt5
	3e8b806ef4f33       cbb01a7bd410d                                                                                         12 minutes ago       Exited              coredns                   0                   096ab15603b01       coredns-6f6b679f8f-zc8jj
	6a1122913bb18       6e38f40d628db                                                                                         12 minutes ago       Exited              storage-provisioner       0                   e30dde4a5a10d       storage-provisioner
	c2a16126718b3       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              13 minutes ago       Exited              kindnet-cni               0                   e260a94a203af       kindnet-77dc6
	fc2e141007efb       ad83b2ca7b09e                                                                                         13 minutes ago       Exited              kube-proxy                0                   5b40cdd6b2c24       kube-proxy-cznkn
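	
	Annotation: the ATTEMPT column above is the per-container restart count (e.g. kube-controller-manager is on attempt 7). A hypothetical cross-check, assuming a kubeconfig that points at this "ha-138000" cluster: the same counts should appear as RestartCount in the pods' ContainerStatuses.
	
	    package main
	
	    import (
	        "context"
	        "fmt"
	
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )
	
	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        // List kube-system pods and print each container's restart count.
	        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	        if err != nil {
	            panic(err)
	        }
	        for _, p := range pods.Items {
	            for _, c := range p.Status.ContainerStatuses {
	                fmt.Printf("%s/%s restarts=%d\n", p.Name, c.Name, c.RestartCount)
	            }
	        }
	    }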
	
	
	==> coredns [3e8b806ef4f3] <==
	[INFO] 10.244.2.2:44773 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000075522s
	[INFO] 10.244.2.2:53805 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098349s
	[INFO] 10.244.2.2:34369 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000122495s
	[INFO] 10.244.0.4:59671 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000077646s
	[INFO] 10.244.0.4:41185 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000079139s
	[INFO] 10.244.0.4:42405 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000092065s
	[INFO] 10.244.0.4:54373 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000049998s
	[INFO] 10.244.0.4:57169 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000050383s
	[INFO] 10.244.0.4:37825 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085108s
	[INFO] 10.244.1.2:59685 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000072268s
	[INFO] 10.244.1.2:32923 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073054s
	[INFO] 10.244.2.2:50876 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000068102s
	[INFO] 10.244.2.2:54719 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000762s
	[INFO] 10.244.0.4:57395 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000091608s
	[INFO] 10.244.0.4:37936 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000031052s
	[INFO] 10.244.1.2:58408 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000088888s
	[INFO] 10.244.1.2:42731 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000114857s
	[INFO] 10.244.1.2:41638 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000082664s
	[INFO] 10.244.2.2:52666 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092331s
	[INFO] 10.244.2.2:41501 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000093116s
	[INFO] 10.244.0.4:48200 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000075447s
	[INFO] 10.244.0.4:35056 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000091854s
	[INFO] 10.244.0.4:36257 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000057922s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [42f5d82b0041] <==
	[INFO] 10.244.1.2:50104 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.009876264s
	[INFO] 10.244.0.4:33653 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115506s
	[INFO] 10.244.0.4:45180 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000042438s
	[INFO] 10.244.1.2:60312 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000068925s
	[INFO] 10.244.1.2:38521 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000124425s
	[INFO] 10.244.1.2:51675 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125646s
	[INFO] 10.244.1.2:33974 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000078827s
	[INFO] 10.244.2.2:38966 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000078816s
	[INFO] 10.244.2.2:56056 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000620092s
	[INFO] 10.244.2.2:32787 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000109221s
	[INFO] 10.244.2.2:55701 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000039601s
	[INFO] 10.244.0.4:52543 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000083971s
	[INFO] 10.244.0.4:55050 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000146353s
	[INFO] 10.244.1.2:52165 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100415s
	[INFO] 10.244.1.2:41123 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060755s
	[INFO] 10.244.2.2:56460 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087503s
	[INFO] 10.244.2.2:36407 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009778s
	[INFO] 10.244.0.4:40764 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000037536s
	[INFO] 10.244.0.4:58473 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000029335s
	[INFO] 10.244.1.2:38640 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000118481s
	[INFO] 10.244.2.2:46151 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000117088s
	[INFO] 10.244.2.2:34054 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000108858s
	[INFO] 10.244.0.4:56735 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000069666s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9b4d9e684266] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35767 - 22561 "HINFO IN 7004530829965964013.1750022571380345519. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015451267s
	
	
	==> coredns [a06770ea62d5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45363 - 12851 "HINFO IN 3106403090745602942.3481725171230015744. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.010450605s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[254954895]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 23:30:59.263) (total time: 30001ms):
	Trace[254954895]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (23:31:29.264)
	Trace[254954895]: [30.001669104s] [30.001669104s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1581349608]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 23:30:59.262) (total time: 30003ms):
	Trace[1581349608]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (23:31:29.264)
	Trace[1581349608]: [30.003336626s] [30.003336626s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[405473182]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 23:30:59.265) (total time: 30001ms):
	Trace[405473182]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (23:31:29.266)
	Trace[405473182]: [30.001211712s] [30.001211712s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
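	
	Annotation: the errors above show CoreDNS's kubernetes plugin timing out on TCP connections to the kubernetes Service VIP (10.96.0.1:443), so it cannot list Namespaces/Services/EndpointSlices until kube-proxy programs that VIP. A hypothetical reachability probe, assuming it is run from inside the cluster network (the ClusterIP is not routable from the host); certificate verification is skipped because only connectivity is being tested:
	
	    package main
	
	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )
	
	    func main() {
	        client := &http.Client{
	            Timeout: 5 * time.Second,
	            Transport: &http.Transport{
	                // Reachability only; do not verify the apiserver cert.
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        // /version is served to unauthenticated clients by default.
	        resp, err := client.Get("https://10.96.0.1:443/version")
	        if err != nil {
	            // While the problem persists this fails the same way as the
	            // CoreDNS log above: dial tcp 10.96.0.1:443: i/o timeout.
	            fmt.Println("VIP unreachable:", err)
	            return
	        }
	        defer resp.Body.Close()
	        fmt.Println("VIP reachable, status:", resp.Status)
	    }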
	
	
	==> describe nodes <==
	Name:               ha-138000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-138000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=ha-138000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T16_19_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:19:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-138000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 23:32:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:30:42 +0000   Thu, 15 Aug 2024 23:19:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:30:42 +0000   Thu, 15 Aug 2024 23:19:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:30:42 +0000   Thu, 15 Aug 2024 23:19:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:30:42 +0000   Thu, 15 Aug 2024 23:30:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-138000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 92a77083c2c148ceb3a6c27974611a44
	  System UUID:                bf1b4c04-0000-0000-a028-0dd0a6dcd337
	  Boot ID:                    0c496489-3552-4f3e-814f-62743ebab1dd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wgww9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-6f6b679f8f-dmgt5             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-6f6b679f8f-zc8jj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-138000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-77dc6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-138000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-138000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-cznkn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-138000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-138000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 109s                   kube-proxy       
	  Normal  Starting                 13m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                    kubelet          Node ha-138000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                    kubelet          Node ha-138000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                    kubelet          Node ha-138000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                    node-controller  Node ha-138000 event: Registered Node ha-138000 in Controller
	  Normal  NodeReady                12m                    kubelet          Node ha-138000 status is now: NodeReady
	  Normal  RegisteredNode           12m                    node-controller  Node ha-138000 event: Registered Node ha-138000 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-138000 event: Registered Node ha-138000 in Controller
	  Normal  RegisteredNode           8m58s                  node-controller  Node ha-138000 event: Registered Node ha-138000 in Controller
	  Normal  NodeHasSufficientMemory  2m37s (x8 over 2m37s)  kubelet          Node ha-138000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m37s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m37s (x8 over 2m37s)  kubelet          Node ha-138000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m37s (x7 over 2m37s)  kubelet          Node ha-138000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m5s                   node-controller  Node ha-138000 event: Registered Node ha-138000 in Controller
	  Normal  RegisteredNode           103s                   node-controller  Node ha-138000 event: Registered Node ha-138000 in Controller
	  Normal  RegisteredNode           65s                    node-controller  Node ha-138000 event: Registered Node ha-138000 in Controller
	
	
	Name:               ha-138000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-138000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=ha-138000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T16_20_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:20:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-138000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 23:32:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:30:40 +0000   Thu, 15 Aug 2024 23:20:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:30:40 +0000   Thu, 15 Aug 2024 23:20:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:30:40 +0000   Thu, 15 Aug 2024 23:20:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:30:40 +0000   Thu, 15 Aug 2024 23:20:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-138000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 f9fb9b8d5e3646d78c1f55449a26b188
	  System UUID:                4cff4215-0000-0000-9139-05f05b79bce3
	  Boot ID:                    26a8e1bf-75d0-4caa-b86c-d0e6f8c9e474
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-s6zqd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-138000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-z6mnx                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-138000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-138000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-tf79g                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-138000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-138000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m7s                   kube-proxy       
	  Normal   Starting                 9m1s                   kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-138000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-138000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-138000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-138000-m02 event: Registered Node ha-138000-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-138000-m02 event: Registered Node ha-138000-m02 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-138000-m02 event: Registered Node ha-138000-m02 in Controller
	  Normal   Starting                 9m7s                   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9m7s                   kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 9m6s                   kubelet          Node ha-138000-m02 has been rebooted, boot id: 8d4ef345-e3b6-437d-95f7-338233576a37
	  Normal   NodeHasSufficientMemory  9m6s                   kubelet          Node ha-138000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m6s                   kubelet          Node ha-138000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m6s                   kubelet          Node ha-138000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m58s                  node-controller  Node ha-138000-m02 event: Registered Node ha-138000-m02 in Controller
	  Normal   Starting                 2m18s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m17s (x8 over 2m18s)  kubelet          Node ha-138000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m17s (x8 over 2m18s)  kubelet          Node ha-138000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m17s (x7 over 2m18s)  kubelet          Node ha-138000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m5s                   node-controller  Node ha-138000-m02 event: Registered Node ha-138000-m02 in Controller
	  Normal   RegisteredNode           103s                   node-controller  Node ha-138000-m02 event: Registered Node ha-138000-m02 in Controller
	  Normal   RegisteredNode           65s                    node-controller  Node ha-138000-m02 event: Registered Node ha-138000-m02 in Controller
	
	
	Name:               ha-138000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-138000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=ha-138000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T16_21_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:21:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-138000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 23:32:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:31:37 +0000   Thu, 15 Aug 2024 23:31:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:31:37 +0000   Thu, 15 Aug 2024 23:31:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:31:37 +0000   Thu, 15 Aug 2024 23:31:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:31:37 +0000   Thu, 15 Aug 2024 23:31:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-138000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 a589cb93968b432caa5fc365bb995740
	  System UUID:                42284b8b-0000-0000-ac7c-129bf380703a
	  Boot ID:                    3cf0bc98-5f0e-4a33-80fb-e0c2d84cf3db
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-t5sdh                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-138000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-dsvxt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-138000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-138000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-kxghx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-138000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-138000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 68s                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-138000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-138000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-138000-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node ha-138000-m03 event: Registered Node ha-138000-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-138000-m03 event: Registered Node ha-138000-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-138000-m03 event: Registered Node ha-138000-m03 in Controller
	  Normal   RegisteredNode           8m58s              node-controller  Node ha-138000-m03 event: Registered Node ha-138000-m03 in Controller
	  Normal   RegisteredNode           2m5s               node-controller  Node ha-138000-m03 event: Registered Node ha-138000-m03 in Controller
	  Normal   RegisteredNode           103s               node-controller  Node ha-138000-m03 event: Registered Node ha-138000-m03 in Controller
	  Normal   NodeNotReady             85s                node-controller  Node ha-138000-m03 status is now: NodeNotReady
	  Normal   Starting                 72s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  72s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  72s (x3 over 72s)  kubelet          Node ha-138000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    72s (x3 over 72s)  kubelet          Node ha-138000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     72s (x3 over 72s)  kubelet          Node ha-138000-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 72s (x2 over 72s)  kubelet          Node ha-138000-m03 has been rebooted, boot id: 3cf0bc98-5f0e-4a33-80fb-e0c2d84cf3db
	  Normal   NodeReady                72s (x2 over 72s)  kubelet          Node ha-138000-m03 status is now: NodeReady
	  Normal   RegisteredNode           65s                node-controller  Node ha-138000-m03 event: Registered Node ha-138000-m03 in Controller
	
	
	Name:               ha-138000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-138000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=ha-138000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T16_22_36_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:22:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-138000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 23:32:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:32:40 +0000   Thu, 15 Aug 2024 23:32:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:32:40 +0000   Thu, 15 Aug 2024 23:32:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:32:40 +0000   Thu, 15 Aug 2024 23:32:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:32:40 +0000   Thu, 15 Aug 2024 23:32:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-138000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 f4edcad8d76a442b9919d65bbd5ebb03
	  System UUID:                e49846a0-0000-0000-a846-8a8b2da04ea9
	  Boot ID:                    7d49d130-2f84-43a9-9c3e-7a69f44367c4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-m887r       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-qpth7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 7s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-138000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-138000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-138000-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-138000-m04 event: Registered Node ha-138000-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-138000-m04 event: Registered Node ha-138000-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-138000-m04 event: Registered Node ha-138000-m04 in Controller
	  Normal   NodeReady                9m52s              kubelet          Node ha-138000-m04 status is now: NodeReady
	  Normal   RegisteredNode           8m58s              node-controller  Node ha-138000-m04 event: Registered Node ha-138000-m04 in Controller
	  Normal   RegisteredNode           2m5s               node-controller  Node ha-138000-m04 event: Registered Node ha-138000-m04 in Controller
	  Normal   RegisteredNode           103s               node-controller  Node ha-138000-m04 event: Registered Node ha-138000-m04 in Controller
	  Normal   NodeNotReady             85s                node-controller  Node ha-138000-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           65s                node-controller  Node ha-138000-m04 event: Registered Node ha-138000-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 9s (x3 over 9s)    kubelet          Node ha-138000-m04 has been rebooted, boot id: 7d49d130-2f84-43a9-9c3e-7a69f44367c4
	  Normal   NodeHasSufficientMemory  9s (x4 over 9s)    kubelet          Node ha-138000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x4 over 9s)    kubelet          Node ha-138000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x4 over 9s)    kubelet          Node ha-138000-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             9s                 kubelet          Node ha-138000-m04 status is now: NodeNotReady
	  Normal   NodeReady                9s (x2 over 9s)    kubelet          Node ha-138000-m04 status is now: NodeReady
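	
	Note: the "Allocated resources" percentages in the node descriptions above are computed against each node's allocatable capacity with integer division, so 950m of CPU requests on 2 allocatable CPUs (2000m) reports as 47% (950*100/2000 = 47.5, truncated to 47). A quick cross-check, assuming kubectl is pointed at this cluster's kubeconfig:
	
	  $ kubectl get node ha-138000 -o jsonpath='{.status.allocatable.cpu}'
	  $ kubectl describe node ha-138000 | grep -A 7 'Allocated resources'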
	
	
	==> dmesg <==
	[  +0.035773] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007968] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.680855] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000003] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006866] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Aug15 23:30] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.162045] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.989029] systemd-fstab-generator[468]: Ignoring "noauto" option for root device
	[  +0.101466] systemd-fstab-generator[487]: Ignoring "noauto" option for root device
	[  +1.930620] systemd-fstab-generator[1017]: Ignoring "noauto" option for root device
	[  +0.060770] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.229646] systemd-fstab-generator[1112]: Ignoring "noauto" option for root device
	[  +0.119765] systemd-fstab-generator[1124]: Ignoring "noauto" option for root device
	[  +0.123401] systemd-fstab-generator[1138]: Ignoring "noauto" option for root device
	[  +2.409334] systemd-fstab-generator[1357]: Ignoring "noauto" option for root device
	[  +0.114639] systemd-fstab-generator[1369]: Ignoring "noauto" option for root device
	[  +0.103538] systemd-fstab-generator[1381]: Ignoring "noauto" option for root device
	[  +0.135144] systemd-fstab-generator[1396]: Ignoring "noauto" option for root device
	[  +0.456371] systemd-fstab-generator[1560]: Ignoring "noauto" option for root device
	[  +6.803779] kauditd_printk_skb: 234 callbacks suppressed
	[ +21.488008] kauditd_printk_skb: 40 callbacks suppressed
	[ +18.019929] kauditd_printk_skb: 21 callbacks suppressed
	[Aug15 23:31] kauditd_printk_skb: 55 callbacks suppressed
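	
	Note: the dmesg entries above (the ACPI checksum and RealTimeClock warnings, the failed regulatory.db firmware load, and the systemd-fstab-generator "noauto" notices) are routine boot noise for the Buildroot guest image and are unlikely to be related to this failure. To re-inspect the guest kernel log while the profile is up, something like the following should work:
	
	  $ minikube ssh -p ha-138000 -- dmesg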
	
	
	==> etcd [59dac0b44544] <==
	{"level":"info","ts":"2024-08-15T23:29:46.384063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to 2399e7dba5b18dfe at term 2"}
	{"level":"info","ts":"2024-08-15T23:29:46.384495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to c8daa22dc1df7d56 at term 2"}
	{"level":"warn","ts":"2024-08-15T23:29:46.408477Z","caller":"etcdserver/server.go:2139","msg":"failed to publish local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-138000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
	{"level":"warn","ts":"2024-08-15T23:29:46.415071Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"2399e7dba5b18dfe","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:29:46.415120Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"2399e7dba5b18dfe","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:29:46.419833Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c8daa22dc1df7d56","rtt":"0s","error":"dial tcp 192.169.0.7:2380: i/o timeout"}
	{"level":"warn","ts":"2024-08-15T23:29:46.419980Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c8daa22dc1df7d56","rtt":"0s","error":"dial tcp 192.169.0.7:2380: i/o timeout"}
	{"level":"warn","ts":"2024-08-15T23:29:46.732045Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419357201672,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-15T23:29:47.233019Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419357201672,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-08-15T23:29:47.382392Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-15T23:29:47.382847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-15T23:29:47.383118Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-08-15T23:29:47.383277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to 2399e7dba5b18dfe at term 2"}
	{"level":"info","ts":"2024-08-15T23:29:47.383307Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to c8daa22dc1df7d56 at term 2"}
	{"level":"warn","ts":"2024-08-15T23:29:47.734052Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419357201672,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-15T23:29:48.244565Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419357201672,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-08-15T23:29:48.381923Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-15T23:29:48.382007Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-15T23:29:48.382026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-08-15T23:29:48.382045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to 2399e7dba5b18dfe at term 2"}
	{"level":"info","ts":"2024-08-15T23:29:48.382066Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to c8daa22dc1df7d56 at term 2"}
	{"level":"warn","ts":"2024-08-15T23:29:48.745537Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419357201672,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-15T23:29:49.013739Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"4.788785781s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-15T23:29:49.013790Z","caller":"traceutil/trace.go:171","msg":"trace[283476530] range","detail":"{range_begin:; range_end:; }","duration":"4.78884981s","start":"2024-08-15T23:29:44.224933Z","end":"2024-08-15T23:29:49.013782Z","steps":["trace[283476530] 'agreement among raft nodes before linearized reading'  (duration: 4.788783568s)"],"step_count":1}
	{"level":"error","ts":"2024-08-15T23:29:49.013846Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: context canceled\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	
	
	==> etcd [67b207257b40] <==
	{"level":"warn","ts":"2024-08-15T23:31:28.335171Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:31:28.340125Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:31:28.341428Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:31:28.354371Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:31:28.454431Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:31:28.554698Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:31:28.659002Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:31:28.754861Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:31:28.856395Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:31:30.243311Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c8daa22dc1df7d56","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:31:30.243321Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c8daa22dc1df7d56","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:31:31.422701Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"c8daa22dc1df7d56","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:31:31.422810Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"c8daa22dc1df7d56","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:31:35.244337Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c8daa22dc1df7d56","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:31:35.244614Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c8daa22dc1df7d56","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:31:35.424045Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"c8daa22dc1df7d56","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:31:35.424130Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"c8daa22dc1df7d56","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-15T23:31:38.649697Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"c8daa22dc1df7d56"}
	{"level":"info","ts":"2024-08-15T23:31:38.665692Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56"}
	{"level":"info","ts":"2024-08-15T23:31:38.730507Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56"}
	{"level":"info","ts":"2024-08-15T23:31:38.835070Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"c8daa22dc1df7d56","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-15T23:31:38.835279Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56"}
	{"level":"info","ts":"2024-08-15T23:31:38.864581Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"c8daa22dc1df7d56","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-15T23:31:38.864626Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56"}
	{"level":"warn","ts":"2024-08-15T23:31:40.245395Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c8daa22dc1df7d56","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	
	
	==> kernel <==
	 23:32:50 up 2 min,  0 users,  load average: 0.19, 0.19, 0.08
	Linux ha-138000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [c2a16126718b] <==
	I0815 23:23:47.704130       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:23:57.712115       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0815 23:23:57.712139       1 main.go:299] handling current node
	I0815 23:23:57.712152       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0815 23:23:57.712157       1 main.go:322] Node ha-138000-m02 has CIDR [10.244.1.0/24] 
	I0815 23:23:57.712420       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0815 23:23:57.712543       1 main.go:322] Node ha-138000-m03 has CIDR [10.244.2.0/24] 
	I0815 23:23:57.712720       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0815 23:23:57.712823       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:24:07.712424       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0815 23:24:07.712474       1 main.go:299] handling current node
	I0815 23:24:07.712488       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0815 23:24:07.712494       1 main.go:322] Node ha-138000-m02 has CIDR [10.244.1.0/24] 
	I0815 23:24:07.712623       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0815 23:24:07.712704       1 main.go:322] Node ha-138000-m03 has CIDR [10.244.2.0/24] 
	I0815 23:24:07.712814       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0815 23:24:07.712851       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:24:17.705680       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0815 23:24:17.705716       1 main.go:322] Node ha-138000-m02 has CIDR [10.244.1.0/24] 
	I0815 23:24:17.706225       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0815 23:24:17.706282       1 main.go:322] Node ha-138000-m03 has CIDR [10.244.2.0/24] 
	I0815 23:24:17.706514       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0815 23:24:17.706582       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:24:17.706957       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0815 23:24:17.707108       1 main.go:299] handling current node
	
	
	==> kindnet [d35ee4327270] <==
	I0815 23:32:20.106395       1 main.go:299] handling current node
	I0815 23:32:30.109647       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0815 23:32:30.109819       1 main.go:299] handling current node
	I0815 23:32:30.110039       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0815 23:32:30.110182       1 main.go:322] Node ha-138000-m02 has CIDR [10.244.1.0/24] 
	I0815 23:32:30.110510       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0815 23:32:30.110595       1 main.go:322] Node ha-138000-m03 has CIDR [10.244.2.0/24] 
	I0815 23:32:30.110933       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0815 23:32:30.111022       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:32:40.105535       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0815 23:32:40.105612       1 main.go:322] Node ha-138000-m03 has CIDR [10.244.2.0/24] 
	I0815 23:32:40.105819       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0815 23:32:40.105982       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:32:40.106120       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0815 23:32:40.106193       1 main.go:299] handling current node
	I0815 23:32:40.106215       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0815 23:32:40.106293       1 main.go:322] Node ha-138000-m02 has CIDR [10.244.1.0/24] 
	I0815 23:32:50.106155       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0815 23:32:50.106270       1 main.go:299] handling current node
	I0815 23:32:50.106298       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0815 23:32:50.106308       1 main.go:322] Node ha-138000-m02 has CIDR [10.244.1.0/24] 
	I0815 23:32:50.106511       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0815 23:32:50.106560       1 main.go:322] Node ha-138000-m03 has CIDR [10.244.2.0/24] 
	I0815 23:32:50.106643       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0815 23:32:50.106701       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
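	
	Note: the kindnet logs above show the CNI agent re-listing the cluster's nodes roughly every 10 seconds and recording each node's pod CIDR (10.244.1.0/24 through 10.244.3.0/24 for the secondary nodes, matching the PodCIDR fields in the node descriptions). The assignments can be cross-checked directly against the node objects:
	
	  $ kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR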
	
	
	==> kube-apiserver [5ed11c46e0eb] <==
	I0815 23:29:32.056397       1 options.go:228] external host was not specified, using 192.169.0.5
	I0815 23:29:32.057840       1 server.go:142] Version: v1.31.0
	I0815 23:29:32.057961       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:29:32.445995       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0815 23:29:32.449536       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 23:29:32.452083       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0815 23:29:32.452114       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0815 23:29:32.452276       1 instance.go:232] Using reconciler: lease
	W0815 23:29:49.041556       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:33594->127.0.0.1:2379: read: connection reset by peer"
	W0815 23:29:49.041696       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:33564->127.0.0.1:2379: read: connection reset by peer"
	W0815 23:29:49.041767       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:33580->127.0.0.1:2379: read: connection reset by peer"
	W0815 23:29:50.044022       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:29:50.044031       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:29:50.044267       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:29:51.372028       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:29:51.388445       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:29:51.855782       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0815 23:29:52.453885       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
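	
	Note: this first kube-apiserver instance never finished startup: its etcd backend on 127.0.0.1:2379 kept resetting and then refusing connections (consistent with the election stall in the etcd log above), and it exited fatally once the storage-factory deadline passed. To pull the full output of an exited container from inside the guest, something like:
	
	  $ minikube ssh -p ha-138000 -- docker logs --tail 25 5ed11c46e0eb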
	
	
	==> kube-apiserver [a6baf6e21d6c] <==
	I0815 23:30:40.344140       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0815 23:30:40.344259       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0815 23:30:40.418768       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0815 23:30:40.419548       1 shared_informer.go:320] Caches are synced for configmaps
	I0815 23:30:40.420315       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0815 23:30:40.420931       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0815 23:30:40.424034       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0815 23:30:40.424129       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0815 23:30:40.424470       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0815 23:30:40.424883       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0815 23:30:40.425391       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0815 23:30:40.425745       1 aggregator.go:171] initial CRD sync complete...
	I0815 23:30:40.425776       1 autoregister_controller.go:144] Starting autoregister controller
	I0815 23:30:40.425782       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0815 23:30:40.425786       1 cache.go:39] Caches are synced for autoregister controller
	I0815 23:30:40.429758       1 shared_informer.go:320] Caches are synced for node_authorizer
	W0815 23:30:40.433000       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.6]
	I0815 23:30:40.451364       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 23:30:40.451641       1 policy_source.go:224] refreshing policies
	I0815 23:30:40.467536       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0815 23:30:40.536982       1 controller.go:615] quota admission added evaluator for: endpoints
	I0815 23:30:40.548680       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0815 23:30:40.556609       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0815 23:30:41.331073       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0815 23:30:41.666666       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	
	
	==> kube-controller-manager [2ed9ae042726] <==
	I0815 23:30:20.677986       1 serving.go:386] Generated self-signed cert in-memory
	I0815 23:30:20.928931       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0815 23:30:20.928987       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:30:20.930507       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0815 23:30:20.930593       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0815 23:30:20.931118       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0815 23:30:20.931317       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0815 23:30:40.940723       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
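	
	Note: this controller-manager instance gave up after ~20s because the apiserver's /healthz kept failing on the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks while everything else reported ok. Once the apiserver is reachable again, the same verbose health report can be fetched with:
	
	  $ kubectl get --raw '/healthz?verbose'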
	
	
	==> kube-controller-manager [f4a0ec142726] <==
	I0815 23:31:24.544150       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m04"
	I0815 23:31:24.554575       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m03"
	I0815 23:31:24.555197       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m04"
	I0815 23:31:24.606976       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.863592ms"
	I0815 23:31:24.607212       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="136.494µs"
	I0815 23:31:26.811428       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m04"
	I0815 23:31:29.784539       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m03"
	I0815 23:31:34.097680       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="7.208476ms"
	I0815 23:31:34.098036       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="54.359µs"
	I0815 23:31:34.111201       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-g7wk5 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-g7wk5\": the object has been modified; please apply your changes to the latest version and try again"
	I0815 23:31:34.111585       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"0bde8909-370a-4104-803d-243eecab8628", APIVersion:"v1", ResourceVersion:"258", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-g7wk5 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-g7wk5": the object has been modified; please apply your changes to the latest version and try again
	I0815 23:31:36.890364       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m03"
	I0815 23:31:37.466268       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m03"
	I0815 23:31:37.479273       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m03"
	I0815 23:31:38.389032       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="35.893µs"
	I0815 23:31:39.577414       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.216738ms"
	I0815 23:31:39.577503       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.266µs"
	I0815 23:31:39.708978       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m03"
	I0815 23:31:39.869802       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m04"
	I0815 23:31:44.491193       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m04"
	I0815 23:31:44.581910       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m04"
	I0815 23:32:40.799384       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-138000-m04"
	I0815 23:32:40.799635       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m04"
	I0815 23:32:40.809568       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m04"
	I0815 23:32:41.795116       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m04"
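The "Operation cannot be fulfilled on endpointslices.discovery.k8s.io ... the object has been modified" event above is an optimistic-concurrency conflict: the controller's cached copy of the EndpointSlice carried a stale resourceVersion, the apiserver rejected the write, and the controller requeued and retried, which is why the line is logged at info level rather than as a failure. A minimal sketch of the standard client-go pattern for absorbing such conflicts, assuming a pre-built clientset; the helper name and the label mutation are illustrative, not the controller's actual code:

	package example

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// touchEndpointSlice retries the read-modify-write cycle whenever the
	// apiserver answers with a resourceVersion conflict: the same class of
	// error the endpointslice controller logs and requeues above.
	func touchEndpointSlice(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			// Re-read the latest version on every attempt.
			slice, err := cs.DiscoveryV1().EndpointSlices(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if slice.Labels == nil {
				slice.Labels = map[string]string{}
			}
			slice.Labels["example.com/touched"] = "true" // illustrative mutation
			_, err = cs.DiscoveryV1().EndpointSlices(ns).Update(ctx, slice, metav1.UpdateOptions{})
			return err
		})
	}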
	
	
	==> kube-proxy [3102e608c7d6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 23:30:59.351348       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 23:30:59.378221       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0815 23:30:59.378378       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 23:30:59.417171       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 23:30:59.417213       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 23:30:59.417230       1 server_linux.go:169] "Using iptables Proxier"
	I0815 23:30:59.420831       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 23:30:59.421491       1 server.go:483] "Version info" version="v1.31.0"
	I0815 23:30:59.421522       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:30:59.424760       1 config.go:197] "Starting service config controller"
	I0815 23:30:59.425626       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 23:30:59.426090       1 config.go:104] "Starting endpoint slice config controller"
	I0815 23:30:59.426116       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 23:30:59.427803       1 config.go:326] "Starting node config controller"
	I0815 23:30:59.428510       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 23:30:59.526834       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 23:30:59.526859       1 shared_informer.go:320] Caches are synced for service config
	I0815 23:30:59.528661       1 shared_informer.go:320] Caches are synced for node config
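Both kube-proxy instances log "Error cleaning up nftables rules ... Operation not supported" before settling on "Using iptables Proxier": on startup the proxier tries to remove any leftover kube-proxy nftables tables, and on this guest kernel (no nf_tables support, no IPv6 iptables) that probe fails harmlessly. A rough reproduction of the failing probe, assuming the nft binary is present; the real proxier drives nftables through a helper library rather than shelling out, so treat this purely as an illustration:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// nftSupported feeds the same one-line command seen in the log
	// ("add table ip kube-proxy") to nft on stdin; on a kernel without
	// nf_tables support this fails with "Operation not supported".
	func nftSupported() (bool, error) {
		cmd := exec.Command("nft", "-f", "-") // "-" makes nft read stdin
		cmd.Stdin = strings.NewReader("add table ip kube-proxy\n")
		if out, err := cmd.CombinedOutput(); err != nil {
			return false, fmt.Errorf("could not run nftables command: %s: %w",
				strings.TrimSpace(string(out)), err)
		}
		return true, nil
	}

	func main() {
		ok, err := nftSupported()
		fmt.Println(ok, err) // false plus the kernel's reason on this guest
	}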
	
	
	==> kube-proxy [fc2e141007ef] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 23:19:33.922056       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 23:19:33.939645       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0815 23:19:33.939881       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 23:19:33.966815       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 23:19:33.966963       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 23:19:33.967061       1 server_linux.go:169] "Using iptables Proxier"
	I0815 23:19:33.969119       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 23:19:33.969437       1 server.go:483] "Version info" version="v1.31.0"
	I0815 23:19:33.969466       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:19:33.970289       1 config.go:197] "Starting service config controller"
	I0815 23:19:33.970403       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 23:19:33.970441       1 config.go:104] "Starting endpoint slice config controller"
	I0815 23:19:33.970446       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 23:19:33.970870       1 config.go:326] "Starting node config controller"
	I0815 23:19:33.970895       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 23:19:34.070930       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 23:19:34.070930       1 shared_informer.go:320] Caches are synced for node config
	I0815 23:19:34.070944       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [ac6935271595] <==
	W0815 23:29:03.654257       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:29:03.654675       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:29:04.192220       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:29:04.192311       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:29:07.683875       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:29:07.683942       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:29:07.708489       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:29:07.708791       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:29:17.257133       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:29:17.257240       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:29:26.626316       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:29:26.626443       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:29:29.967116       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:29:29.967155       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:29:42.147720       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0815 23:29:42.148149       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0815 23:29:43.616204       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0815 23:29:43.616440       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0815 23:29:45.922991       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0815 23:29:45.923106       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	E0815 23:29:49.027901       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	I0815 23:29:49.028326       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0815 23:29:49.028478       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0815 23:29:49.028500       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file" logger="UnhandledError"
	E0815 23:29:49.029058       1 run.go:72] "command failed" err="finished without leader elect"
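The reflector failures above trace the restart in two phases: "connection refused" while nothing is listening on 192.169.0.5:8443, then "TLS handshake timeout" once the apiserver's socket is open but the server is still too loaded to complete a handshake; this scheduler instance finally gives up with "finished without leader elect". A small diagnostic sketch that separates those two failure layers the same way (the address is the one from the logs; everything else is illustrative):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net"
		"time"
	)

	// probe reports whether a failure against the apiserver endpoint happens
	// at the TCP layer (connection refused) or during the TLS handshake
	// (handshake timeout), matching the two phases in the reflector errors.
	func probe(addr string) string {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			return fmt.Sprintf("tcp layer: %v", err)
		}
		defer conn.Close()
		tc := tls.Client(conn, &tls.Config{InsecureSkipVerify: true}) // reachability probe only
		_ = tc.SetDeadline(time.Now().Add(2 * time.Second))
		if err := tc.Handshake(); err != nil {
			return fmt.Sprintf("tls layer: %v", err)
		}
		return "reachable"
	}

	func main() {
		fmt.Println(probe("192.169.0.5:8443"))
	}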
	
	
	==> kube-scheduler [c2ddb52a9846] <==
	I0815 23:30:20.706878       1 serving.go:386] Generated self-signed cert in-memory
	W0815 23:30:31.075526       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0815 23:30:31.075552       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0815 23:30:31.075556       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0815 23:30:40.370669       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0815 23:30:40.370712       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:30:40.375435       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0815 23:30:40.379182       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0815 23:30:40.379313       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0815 23:30:40.379473       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0815 23:30:40.480276       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
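This replacement scheduler comes up cleanly: it generates an in-memory self-signed serving certificate, tolerates the authentication-configuration lookup failure while the apiserver is still warming up, and then syncs its caches. A compact standard-library sketch of the self-signed-cert step, as an illustration of the technique rather than the scheduler's actual code:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"time"
	)

	// selfSignedCert builds an in-memory, self-signed serving certificate,
	// the same trick "Generated self-signed cert in-memory" refers to.
	func selfSignedCert() (*x509.Certificate, error) {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			return nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "kube-scheduler"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			return nil, err
		}
		return x509.ParseCertificate(der)
	}

	func main() {
		cert, err := selfSignedCert()
		if err != nil {
			panic(err)
		}
		fmt.Println(cert.Subject)
	}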
	
	
	==> kubelet <==
	Aug 15 23:30:46 ha-138000 kubelet[1567]: E0815 23:30:46.668035    1567 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.11.1,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {<nil>} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{73400320 0} {<nil>} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8bnpm,ReadOnly:true,MountPath:/var/run/secrets/kubern
etes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunA
sGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod coredns-6f6b679f8f-dmgt5_kube-system(47d73953-ec2c-4f17-b2b8-d6a9b5e5a316): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Aug 15 23:30:46 ha-138000 kubelet[1567]: E0815 23:30:46.668170    1567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="default/busybox-7dff88458-wgww9" podUID="b8eb799e-e761-4647-8aae-388c38bc936e"
	Aug 15 23:30:46 ha-138000 kubelet[1567]: E0815 23:30:46.669336    1567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-6f6b679f8f-dmgt5" podUID="47d73953-ec2c-4f17-b2b8-d6a9b5e5a316"
	Aug 15 23:30:46 ha-138000 kubelet[1567]: E0815 23:30:46.669381    1567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-6f6b679f8f-zc8jj" podUID="b4a9df39-b09d-4bc3-97f6-b3176ff8e842"
	Aug 15 23:30:46 ha-138000 kubelet[1567]: E0815 23:30:46.669395    1567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-cznkn" podUID="61cd1a9a-80fb-4d0e-a4dd-f5ed2d6ddc0f"
	Aug 15 23:30:49 ha-138000 kubelet[1567]: I0815 23:30:49.205357    1567 scope.go:117] "RemoveContainer" containerID="2ed9ae04272666896274c0cc9cbac7e240c18a02b0b35eaab975e10a79d1a635"
	Aug 15 23:30:49 ha-138000 kubelet[1567]: E0815 23:30:49.205497    1567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-138000_kube-system(ed196a03081880609aebd781f662c0b9)\"" pod="kube-system/kube-controller-manager-ha-138000" podUID="ed196a03081880609aebd781f662c0b9"
	Aug 15 23:30:58 ha-138000 kubelet[1567]: I0815 23:30:58.825605    1567 scope.go:117] "RemoveContainer" containerID="3e8b806ef4f33fe0f0fca48027df27c689fecf6b07621dedc5ef13adcc0374c3"
	Aug 15 23:30:58 ha-138000 kubelet[1567]: I0815 23:30:58.827210    1567 scope.go:117] "RemoveContainer" containerID="fc2e141007efbc5a944ce056112991ed717c9f8dc75269aa7a0eac8f8dde6098"
	Aug 15 23:30:58 ha-138000 kubelet[1567]: I0815 23:30:58.827640    1567 scope.go:117] "RemoveContainer" containerID="c2a16126718b32a024e2d52492029acb6291ffb8595d909499955382a9b4b0d1"
	Aug 15 23:30:59 ha-138000 kubelet[1567]: I0815 23:30:59.824309    1567 scope.go:117] "RemoveContainer" containerID="6a1122913bb1811dd9cfff9fde8c221a2c969f80db1f0bcc1a66f58faaa88395"
	Aug 15 23:31:00 ha-138000 kubelet[1567]: I0815 23:31:00.825729    1567 scope.go:117] "RemoveContainer" containerID="8f20284cd3969cd69aa4dd7eb37b8d05c7df4f53aa8c6f636949fd401174eba1"
	Aug 15 23:31:01 ha-138000 kubelet[1567]: I0815 23:31:01.825360    1567 scope.go:117] "RemoveContainer" containerID="42f5d82b004174c93ffa1441e156ff5ca6d23b9457598805927d06b8823a41bd"
	Aug 15 23:31:03 ha-138000 kubelet[1567]: I0815 23:31:03.825285    1567 scope.go:117] "RemoveContainer" containerID="2ed9ae04272666896274c0cc9cbac7e240c18a02b0b35eaab975e10a79d1a635"
	Aug 15 23:31:12 ha-138000 kubelet[1567]: E0815 23:31:12.861012    1567 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 23:31:12 ha-138000 kubelet[1567]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 23:31:12 ha-138000 kubelet[1567]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 23:31:12 ha-138000 kubelet[1567]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 23:31:12 ha-138000 kubelet[1567]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 23:31:12 ha-138000 kubelet[1567]: I0815 23:31:12.976621    1567 scope.go:117] "RemoveContainer" containerID="e919017e14bb91f5bec7b5fdf0351f27904f841341d654e814d90d000a091f26"
	Aug 15 23:32:12 ha-138000 kubelet[1567]: E0815 23:32:12.862060    1567 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 23:32:12 ha-138000 kubelet[1567]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 23:32:12 ha-138000 kubelet[1567]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 23:32:12 ha-138000 kubelet[1567]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 23:32:12 ha-138000 kubelet[1567]:  > table="nat" chain="KUBE-KUBELET-CANARY"
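The kubelet's CreateContainerConfigError lines ("services have not yet been read at least once, cannot construct envvars") stem from the legacy Docker-links-style environment variables that every container receives for each visible Service; until the kubelet's service informer has completed one full list, it refuses to start pods rather than inject an incomplete environment. A simplified sketch of how those variables are derived, following the documented naming convention (the helper itself is illustrative):

	package main

	import (
		"fmt"
		"strings"
	)

	// serviceEnv builds the legacy per-service environment variables that the
	// kubelet injects into containers, e.g. KUBE_DNS_SERVICE_HOST for the
	// kube-dns service referenced in the logs above.
	func serviceEnv(name, clusterIP string, port int) map[string]string {
		prefix := strings.ReplaceAll(strings.ToUpper(name), "-", "_")
		return map[string]string{
			prefix + "_SERVICE_HOST": clusterIP,
			prefix + "_SERVICE_PORT": fmt.Sprint(port),
		}
	}

	func main() {
		for k, v := range serviceEnv("kube-dns", "10.96.0.10", 53) {
			fmt.Printf("%s=%s\n", k, v)
		}
	}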
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-138000 -n ha-138000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-138000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (177.69s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (4.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:413: expected profile "ha-138000" in json of 'profile list' to have "Degraded" status but have "HAppy" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-138000\",\"Status\":\"HAppy\",\"Config\":{\"Name\":\"ha-138000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-138000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"Kubern
etesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.169.0.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":fal
se,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\
"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-138000 -n ha-138000
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-138000 logs -n 25: (3.249682981s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-138000 cp ha-138000-m03:/home/docker/cp-test.txt                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04:/home/docker/cp-test_ha-138000-m03_ha-138000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n ha-138000-m04 sudo cat                                                                                      | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /home/docker/cp-test_ha-138000-m03_ha-138000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-138000 cp testdata/cp-test.txt                                                                                            | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-138000 cp ha-138000-m04:/home/docker/cp-test.txt                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1119105544/001/cp-test_ha-138000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-138000 cp ha-138000-m04:/home/docker/cp-test.txt                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000:/home/docker/cp-test_ha-138000-m04_ha-138000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n ha-138000 sudo cat                                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /home/docker/cp-test_ha-138000-m04_ha-138000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-138000 cp ha-138000-m04:/home/docker/cp-test.txt                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m02:/home/docker/cp-test_ha-138000-m04_ha-138000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n ha-138000-m02 sudo cat                                                                                      | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /home/docker/cp-test_ha-138000-m04_ha-138000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-138000 cp ha-138000-m04:/home/docker/cp-test.txt                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m03:/home/docker/cp-test_ha-138000-m04_ha-138000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n ha-138000-m03 sudo cat                                                                                      | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /home/docker/cp-test_ha-138000-m04_ha-138000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-138000 node stop m02 -v=7                                                                                                 | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-138000 node start m02 -v=7                                                                                                | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:24 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-138000 -v=7                                                                                                       | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:24 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-138000 -v=7                                                                                                            | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:24 PDT | 15 Aug 24 16:24 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-138000 --wait=true -v=7                                                                                                | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:24 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-138000                                                                                                            | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:26 PDT |                     |
	| node    | ha-138000 node delete m03 -v=7                                                                                               | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:26 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-138000 stop -v=7                                                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:27 PDT | 15 Aug 24 16:29 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-138000 --wait=true                                                                                                     | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:29 PDT | 15 Aug 24 16:32 PDT |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 16:29:54
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 16:29:54.033682    3848 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:29:54.033848    3848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:29:54.033854    3848 out.go:358] Setting ErrFile to fd 2...
	I0815 16:29:54.033858    3848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:29:54.034027    3848 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-977/.minikube/bin
	I0815 16:29:54.035457    3848 out.go:352] Setting JSON to false
	I0815 16:29:54.058003    3848 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1765,"bootTime":1723762829,"procs":428,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0815 16:29:54.058095    3848 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 16:29:54.080014    3848 out.go:177] * [ha-138000] minikube v1.33.1 on Darwin 14.6.1
	I0815 16:29:54.122634    3848 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 16:29:54.122696    3848 notify.go:220] Checking for updates...
	I0815 16:29:54.164406    3848 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:29:54.185700    3848 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0815 16:29:54.206554    3848 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 16:29:54.227614    3848 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-977/.minikube
	I0815 16:29:54.248519    3848 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 16:29:54.270441    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:29:54.271125    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:29:54.271225    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:29:54.280836    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52223
	I0815 16:29:54.281188    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:29:54.281595    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:29:54.281610    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:29:54.281823    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:29:54.281934    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:29:54.282121    3848 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 16:29:54.282360    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:29:54.282379    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:29:54.290749    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52225
	I0815 16:29:54.291068    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:29:54.291384    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:29:54.291393    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:29:54.291633    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:29:54.291762    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:29:54.320542    3848 out.go:177] * Using the hyperkit driver based on existing profile
	I0815 16:29:54.362577    3848 start.go:297] selected driver: hyperkit
	I0815 16:29:54.362603    3848 start.go:901] validating driver "hyperkit" against &{Name:ha-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclas
s:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersio
n:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:29:54.362832    3848 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 16:29:54.363029    3848 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:29:54.363230    3848 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19452-977/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0815 16:29:54.372833    3848 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0815 16:29:54.376641    3848 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:29:54.376661    3848 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0815 16:29:54.379303    3848 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 16:29:54.379340    3848 cni.go:84] Creating CNI manager for ""
	I0815 16:29:54.379348    3848 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0815 16:29:54.379445    3848 start.go:340] cluster config:
	{Name:ha-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.16
9.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-t
iller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:29:54.379558    3848 iso.go:125] acquiring lock: {Name:mkb93fc66b60be15dffa9eb2999a4be8d8d4d218 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:29:54.421457    3848 out.go:177] * Starting "ha-138000" primary control-plane node in "ha-138000" cluster
	I0815 16:29:54.442393    3848 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:29:54.442490    3848 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0815 16:29:54.442517    3848 cache.go:56] Caching tarball of preloaded images
	I0815 16:29:54.442747    3848 preload.go:172] Found /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0815 16:29:54.442766    3848 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:29:54.442942    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:29:54.443891    3848 start.go:360] acquireMachinesLock for ha-138000: {Name:mkac8034c8a60617271f2754a03dcaf74fc4fef7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:29:54.444072    3848 start.go:364] duration metric: took 141.088µs to acquireMachinesLock for "ha-138000"
	I0815 16:29:54.444120    3848 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:29:54.444137    3848 fix.go:54] fixHost starting: 
	I0815 16:29:54.444553    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:29:54.444588    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:29:54.453701    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52227
	I0815 16:29:54.454060    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:29:54.454408    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:29:54.454428    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:29:54.454668    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:29:54.454795    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:29:54.454900    3848 main.go:141] libmachine: (ha-138000) Calling .GetState
	I0815 16:29:54.455015    3848 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:29:54.455069    3848 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid from json: 3662
	I0815 16:29:54.455998    3848 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid 3662 missing from process table
	I0815 16:29:54.456024    3848 fix.go:112] recreateIfNeeded on ha-138000: state=Stopped err=<nil>
	I0815 16:29:54.456037    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	W0815 16:29:54.456128    3848 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:29:54.477408    3848 out.go:177] * Restarting existing hyperkit VM for "ha-138000" ...
	I0815 16:29:54.498281    3848 main.go:141] libmachine: (ha-138000) Calling .Start
	I0815 16:29:54.498449    3848 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:29:54.498522    3848 main.go:141] libmachine: (ha-138000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/hyperkit.pid
	I0815 16:29:54.498549    3848 main.go:141] libmachine: (ha-138000) DBG | Using UUID bf1b12d0-37a9-4c04-a028-0dd0a6dcd337
	I0815 16:29:54.612230    3848 main.go:141] libmachine: (ha-138000) DBG | Generated MAC 66:4d:cd:54:35:15
	I0815 16:29:54.612256    3848 main.go:141] libmachine: (ha-138000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000
	I0815 16:29:54.612403    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bf1b12d0-37a9-4c04-a028-0dd0a6dcd337", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002a9530)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:29:54.612447    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bf1b12d0-37a9-4c04-a028-0dd0a6dcd337", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002a9530)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:29:54.612479    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "bf1b12d0-37a9-4c04-a028-0dd0a6dcd337", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/ha-138000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"}
	I0815 16:29:54.612534    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U bf1b12d0-37a9-4c04-a028-0dd0a6dcd337 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/ha-138000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/console-ring -f kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"
	I0815 16:29:54.612554    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0815 16:29:54.613954    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 DEBUG: hyperkit: Pid is 3862
	I0815 16:29:54.614352    3848 main.go:141] libmachine: (ha-138000) DBG | Attempt 0
	I0815 16:29:54.614367    3848 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:29:54.614458    3848 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid from json: 3862
	I0815 16:29:54.615668    3848 main.go:141] libmachine: (ha-138000) DBG | Searching for 66:4d:cd:54:35:15 in /var/db/dhcpd_leases ...
	I0815 16:29:54.615762    3848 main.go:141] libmachine: (ha-138000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0815 16:29:54.615788    3848 main.go:141] libmachine: (ha-138000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66be8f71}
	I0815 16:29:54.615808    3848 main.go:141] libmachine: (ha-138000) DBG | Found match: 66:4d:cd:54:35:15
	I0815 16:29:54.615836    3848 main.go:141] libmachine: (ha-138000) DBG | IP: 192.169.0.5
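
The driver resolves the restarted VM's address by scanning macOS's /var/db/dhcpd_leases for the MAC it generated. A minimal sketch of that lookup in Go, assuming each lease entry lists ip_address before hw_address; the helper name and parsing are illustrative, not minikube's actual code:

// lookup_lease.go - hypothetical sketch of resolving a VM IP from
// /var/db/dhcpd_leases by MAC address, as the log does above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipForMAC returns the ip_address of the lease whose hw_address ends
// with mac. Assumes ip_address precedes hw_address within an entry.
func ipForMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// e.g. hw_address=1,66:4d:cd:54:35:15 (type prefix, then MAC)
			if strings.HasSuffix(line, mac) && ip != "" {
				return ip, nil
			}
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := ipForMAC("/var/db/dhcpd_leases", "66:4d:cd:54:35:15")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ip) // expected 192.169.0.5 per the log above
}
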
	I0815 16:29:54.615932    3848 main.go:141] libmachine: (ha-138000) Calling .GetConfigRaw
	I0815 16:29:54.616670    3848 main.go:141] libmachine: (ha-138000) Calling .GetIP
	I0815 16:29:54.616859    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:29:54.617254    3848 machine.go:93] provisionDockerMachine start ...
	I0815 16:29:54.617264    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:29:54.617414    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:29:54.617528    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:29:54.617607    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:29:54.617679    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:29:54.617801    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:29:54.617967    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:29:54.618192    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:29:54.618201    3848 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 16:29:54.621800    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0815 16:29:54.673574    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0815 16:29:54.674258    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:29:54.674277    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:29:54.674284    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:29:54.674293    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:29:55.057707    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:55 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0815 16:29:55.057723    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:55 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0815 16:29:55.172245    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:29:55.172277    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:29:55.172313    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:29:55.172333    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:29:55.173142    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:55 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0815 16:29:55.173153    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:55 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0815 16:30:00.749814    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:30:00 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0815 16:30:00.749867    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:30:00 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0815 16:30:00.749877    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:30:00 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0815 16:30:00.774690    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:30:00 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0815 16:30:05.697072    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 16:30:05.697084    3848 main.go:141] libmachine: (ha-138000) Calling .GetMachineName
	I0815 16:30:05.697230    3848 buildroot.go:166] provisioning hostname "ha-138000"
	I0815 16:30:05.697241    3848 main.go:141] libmachine: (ha-138000) Calling .GetMachineName
	I0815 16:30:05.697340    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:05.697431    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:05.697531    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:05.697615    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:05.697729    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:05.697864    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:05.698023    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:30:05.698032    3848 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-138000 && echo "ha-138000" | sudo tee /etc/hostname
	I0815 16:30:05.773271    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-138000
	
	I0815 16:30:05.773290    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:05.773430    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:05.773543    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:05.773660    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:05.773777    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:05.773935    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:05.774084    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:30:05.774095    3848 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-138000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-138000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-138000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 16:30:05.843913    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
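
The shell just run over SSH makes the /etc/hosts entry idempotent: if a line already ends with the hostname, do nothing; otherwise rewrite an existing 127.0.1.1 line or append one. The same logic as a Go sketch, shown only to clarify the flow (a hypothetical helper, not minikube's implementation):

// ensure_hostname.go - sketch of the /etc/hosts hostname fixup above.
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

func ensureHostsEntry(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	content := string(data)
	// Some line already ends with the hostname? Nothing to do.
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(content) {
		return nil
	}
	// Rewrite an existing 127.0.1.1 line, else append a fresh one.
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(content) {
		content = loopback.ReplaceAllString(content, "127.0.1.1 "+name)
	} else {
		if !strings.HasSuffix(content, "\n") {
			content += "\n"
		}
		content += "127.0.1.1 " + name + "\n"
	}
	return os.WriteFile(path, []byte(content), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "ha-138000"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
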
	I0815 16:30:05.843933    3848 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19452-977/.minikube CaCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19452-977/.minikube}
	I0815 16:30:05.843947    3848 buildroot.go:174] setting up certificates
	I0815 16:30:05.843955    3848 provision.go:84] configureAuth start
	I0815 16:30:05.843962    3848 main.go:141] libmachine: (ha-138000) Calling .GetMachineName
	I0815 16:30:05.844101    3848 main.go:141] libmachine: (ha-138000) Calling .GetIP
	I0815 16:30:05.844215    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:05.844315    3848 provision.go:143] copyHostCerts
	I0815 16:30:05.844350    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:30:05.844436    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem, removing ...
	I0815 16:30:05.844445    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:30:05.844633    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem (1082 bytes)
	I0815 16:30:05.844853    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:30:05.844900    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem, removing ...
	I0815 16:30:05.844906    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:30:05.844989    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem (1123 bytes)
	I0815 16:30:05.845165    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:30:05.845202    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem, removing ...
	I0815 16:30:05.845207    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:30:05.845283    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem (1679 bytes)
	I0815 16:30:05.845432    3848 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem org=jenkins.ha-138000 san=[127.0.0.1 192.169.0.5 ha-138000 localhost minikube]
	I0815 16:30:06.272971    3848 provision.go:177] copyRemoteCerts
	I0815 16:30:06.273031    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 16:30:06.273048    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:06.273185    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:06.273289    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:06.273389    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:06.273476    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:30:06.313671    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 16:30:06.313804    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 16:30:06.335207    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 16:30:06.335264    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0815 16:30:06.355028    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 16:30:06.355085    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 16:30:06.374691    3848 provision.go:87] duration metric: took 530.722569ms to configureAuth
	I0815 16:30:06.374705    3848 buildroot.go:189] setting minikube options for container-runtime
	I0815 16:30:06.374882    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:30:06.374898    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:30:06.375031    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:06.375135    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:06.375215    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:06.375302    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:06.375381    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:06.375501    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:06.375633    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:30:06.375641    3848 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0815 16:30:06.439797    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0815 16:30:06.439813    3848 buildroot.go:70] root file system type: tmpfs
	I0815 16:30:06.439885    3848 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0815 16:30:06.439896    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:06.440029    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:06.440119    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:06.440211    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:06.440322    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:06.440461    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:06.440594    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:30:06.440647    3848 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0815 16:30:06.516125    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0815 16:30:06.516150    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:06.516294    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:06.516408    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:06.516493    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:06.516594    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:06.516721    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:06.516850    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:30:06.516863    3848 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0815 16:30:08.163546    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0815 16:30:08.163562    3848 machine.go:96] duration metric: took 13.546346493s to provisionDockerMachine
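
provisionDockerMachine finished by writing docker.service.new and only swapping it in (then daemon-reload, enable, restart) when `diff` reported a change, so an unchanged unit costs no Docker restart. A Go sketch of that compare-then-replace pattern, assuming local file access instead of the SSH session the log uses:

// update_unit.go - sketch of the "diff || replace and restart" pattern.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func updateUnit(path string, rendered []byte) error {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return nil // unchanged: skip the restart entirely
	}
	if err := os.WriteFile(path, rendered, 0644); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
	if err := updateUnit("/lib/systemd/system/docker.service", unit); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
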
	I0815 16:30:08.163573    3848 start.go:293] postStartSetup for "ha-138000" (driver="hyperkit")
	I0815 16:30:08.163581    3848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 16:30:08.163591    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:30:08.163828    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 16:30:08.163844    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:08.163938    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:08.164036    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:08.164139    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:08.164243    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:30:08.204020    3848 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 16:30:08.207179    3848 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 16:30:08.207192    3848 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/addons for local assets ...
	I0815 16:30:08.207302    3848 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/files for local assets ...
	I0815 16:30:08.207487    3848 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> 14982.pem in /etc/ssl/certs
	I0815 16:30:08.207494    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /etc/ssl/certs/14982.pem
	I0815 16:30:08.207699    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 16:30:08.215716    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:30:08.234526    3848 start.go:296] duration metric: took 70.944461ms for postStartSetup
	I0815 16:30:08.234554    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:30:08.234725    3848 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0815 16:30:08.234737    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:08.234828    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:08.234919    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:08.235004    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:08.235082    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:30:08.273169    3848 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0815 16:30:08.273225    3848 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0815 16:30:08.324608    3848 fix.go:56] duration metric: took 13.880521363s for fixHost
	I0815 16:30:08.324634    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:08.324763    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:08.324864    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:08.324958    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:08.325046    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:08.325174    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:08.325312    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:30:08.325319    3848 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 16:30:08.390142    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723764608.424079213
	
	I0815 16:30:08.390153    3848 fix.go:216] guest clock: 1723764608.424079213
	I0815 16:30:08.390158    3848 fix.go:229] Guest: 2024-08-15 16:30:08.424079213 -0700 PDT Remote: 2024-08-15 16:30:08.324621 -0700 PDT m=+14.326357489 (delta=99.458213ms)
	I0815 16:30:08.390181    3848 fix.go:200] guest clock delta is within tolerance: 99.458213ms
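
fix.go reads `date +%s.%N` from the guest, compares it with the host clock, and accepts the drift when it is inside a tolerance (here about 99ms). A sketch of that comparison; the 2-second bound is an assumed value for illustration, not minikube's actual constant:

// clock_delta.go - sketch of the guest-clock tolerance check above.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

const tolerance = 2 * time.Second // assumed bound for illustration

func withinTolerance(guestOut string, host time.Time) (time.Duration, bool, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, false, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := guest.Sub(host)
	return delta, math.Abs(float64(delta)) <= float64(tolerance), nil
}

func main() {
	// Guest output and host time taken from the log lines above.
	delta, ok, err := withinTolerance("1723764608.424079213",
		time.Unix(1723764608, 324621000))
	if err != nil {
		panic(err)
	}
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok)
}
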
	I0815 16:30:08.390185    3848 start.go:83] releasing machines lock for "ha-138000", held for 13.946148575s
	I0815 16:30:08.390205    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:30:08.390341    3848 main.go:141] libmachine: (ha-138000) Calling .GetIP
	I0815 16:30:08.390446    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:30:08.390809    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:30:08.390921    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:30:08.390989    3848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 16:30:08.391019    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:08.391075    3848 ssh_runner.go:195] Run: cat /version.json
	I0815 16:30:08.391087    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:08.391112    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:08.391203    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:08.391220    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:08.391315    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:08.391333    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:08.391411    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:08.391426    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:30:08.391513    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:30:08.423504    3848 ssh_runner.go:195] Run: systemctl --version
	I0815 16:30:08.428371    3848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 16:30:08.479207    3848 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 16:30:08.479307    3848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 16:30:08.492318    3848 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 16:30:08.492331    3848 start.go:495] detecting cgroup driver to use...
	I0815 16:30:08.492428    3848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:30:08.510522    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0815 16:30:08.519382    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0815 16:30:08.528348    3848 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0815 16:30:08.528399    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0815 16:30:08.537505    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:30:08.546478    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0815 16:30:08.555462    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:30:08.564389    3848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 16:30:08.573622    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0815 16:30:08.582698    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0815 16:30:08.591735    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0815 16:30:08.600760    3848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 16:30:08.609049    3848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 16:30:08.617235    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:08.722765    3848 ssh_runner.go:195] Run: sudo systemctl restart containerd
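
The run of `sed` edits above rewrites /etc/containerd/config.toml so containerd matches the "cgroupfs" driver minikube selected, then restarts containerd. One of those edits, the SystemdCgroup toggle, as an equivalent Go sketch (a hypothetical helper):

// containerd_cgroup.go - Go equivalent of:
// sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
package main

import (
	"fmt"
	"os"
	"regexp"
)

func setCgroupfs(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	return os.WriteFile(path, out, 0644)
}

func main() {
	if err := setCgroupfs("/etc/containerd/config.toml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
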
	I0815 16:30:08.746033    3848 start.go:495] detecting cgroup driver to use...
	I0815 16:30:08.746116    3848 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0815 16:30:08.759830    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:30:08.771599    3848 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 16:30:08.789529    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:30:08.802787    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:30:08.815377    3848 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0815 16:30:08.844257    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:30:08.860249    3848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:30:08.875283    3848 ssh_runner.go:195] Run: which cri-dockerd
	I0815 16:30:08.878327    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0815 16:30:08.886411    3848 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0815 16:30:08.899899    3848 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0815 16:30:09.005084    3848 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0815 16:30:09.128876    3848 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0815 16:30:09.128948    3848 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0815 16:30:09.143602    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:09.247986    3848 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0815 16:30:11.515907    3848 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.267909782s)
	I0815 16:30:11.515971    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0815 16:30:11.526125    3848 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0815 16:30:11.539600    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:30:11.550726    3848 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0815 16:30:11.659005    3848 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0815 16:30:11.764312    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:11.871322    3848 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0815 16:30:11.884643    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:30:11.896838    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:12.002912    3848 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0815 16:30:12.062997    3848 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0815 16:30:12.063089    3848 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0815 16:30:12.067549    3848 start.go:563] Will wait 60s for crictl version
	I0815 16:30:12.067596    3848 ssh_runner.go:195] Run: which crictl
	I0815 16:30:12.070446    3848 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 16:30:12.096434    3848 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0815 16:30:12.096513    3848 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:30:12.116037    3848 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:30:12.178340    3848 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0815 16:30:12.178421    3848 main.go:141] libmachine: (ha-138000) Calling .GetIP
	I0815 16:30:12.178824    3848 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0815 16:30:12.183375    3848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 16:30:12.193025    3848 kubeadm.go:883] updating cluster {Name:ha-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 16:30:12.193108    3848 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:30:12.193158    3848 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0815 16:30:12.206441    3848 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0815 16:30:12.206452    3848 docker.go:615] Images already preloaded, skipping extraction
	I0815 16:30:12.206519    3848 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0815 16:30:12.219546    3848 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0815 16:30:12.219565    3848 cache_images.go:84] Images are preloaded, skipping loading
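
Extraction is skipped because every expected image already appears in the `docker images --format {{.Repository}}:{{.Tag}}` output above. A set-membership sketch of that decision; the helper name and expected list are illustrative:

// preload_check.go - sketch of the "Images are preloaded, skipping
// loading" decision: treat docker's output as a set and verify that
// every expected image is present.
package main

import (
	"fmt"
	"strings"
)

func allPreloaded(expected []string, dockerImagesOut string) bool {
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(dockerImagesOut), "\n") {
		have[strings.TrimSpace(line)] = true
	}
	for _, img := range expected {
		if !have[img] {
			return false
		}
	}
	return true
}

func main() {
	out := "registry.k8s.io/kube-apiserver:v1.31.0\nregistry.k8s.io/pause:3.10\n"
	expected := []string{"registry.k8s.io/pause:3.10"}
	fmt.Println(allPreloaded(expected, out)) // true
}
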
	I0815 16:30:12.219576    3848 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0815 16:30:12.219652    3848 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-138000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 16:30:12.219721    3848 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0815 16:30:12.258519    3848 cni.go:84] Creating CNI manager for ""
	I0815 16:30:12.258529    3848 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0815 16:30:12.258542    3848 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 16:30:12.258557    3848 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-138000 NodeName:ha-138000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 16:30:12.258636    3848 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-138000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
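
The YAML above is rendered from the options struct logged at kubeadm.go:181. A compressed sketch of that kind of rendering with text/template; the struct fields and template here are illustrative and cover only a few options, not minikube's actual template:

// kubeadm_tmpl.go - hypothetical sketch of rendering a kubeadm config
// from an options struct via text/template.
package main

import (
	"os"
	"text/template"
)

type kubeadmOpts struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
	ServiceCIDR      string
	K8sVersion       string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	opts := kubeadmOpts{
		AdvertiseAddress: "192.169.0.5",
		BindPort:         8443,
		NodeName:         "ha-138000",
		PodSubnet:        "10.244.0.0/16",
		ServiceCIDR:      "10.96.0.0/12",
		K8sVersion:       "v1.31.0",
	}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
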
	
	I0815 16:30:12.258649    3848 kube-vip.go:115] generating kube-vip config ...
	I0815 16:30:12.258696    3848 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 16:30:12.271337    3848 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 16:30:12.271407    3848 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
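
The lease settings rendered above (vip_leaseduration=5, vip_renewdeadline=3, vip_retryperiod=1) follow the usual Kubernetes leader-election ordering: lease duration > renew deadline > retry period. A small validation sketch of that invariant; this is an illustration of the constraint, not kube-vip's code:

// lease_check.go - validate leader-election lease timing ordering.
package main

import (
	"fmt"
	"time"
)

func validLeaseTiming(lease, renew, retry time.Duration) error {
	if lease <= renew {
		return fmt.Errorf("lease duration %v must be > renew deadline %v", lease, renew)
	}
	if renew <= retry {
		return fmt.Errorf("renew deadline %v must be > retry period %v", renew, retry)
	}
	return nil
}

func main() {
	// The values rendered into the kube-vip manifest above.
	if err := validLeaseTiming(5*time.Second, 3*time.Second, time.Second); err != nil {
		panic(err)
	}
	fmt.Println("kube-vip lease timing OK")
}
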
	I0815 16:30:12.271468    3848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 16:30:12.279197    3848 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 16:30:12.279243    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0815 16:30:12.286309    3848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0815 16:30:12.299687    3848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 16:30:12.313389    3848 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0815 16:30:12.327846    3848 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0815 16:30:12.341535    3848 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0815 16:30:12.344364    3848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 16:30:12.353627    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:12.452370    3848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:30:12.466830    3848 certs.go:68] Setting up /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000 for IP: 192.169.0.5
	I0815 16:30:12.466842    3848 certs.go:194] generating shared ca certs ...
	I0815 16:30:12.466852    3848 certs.go:226] acquiring lock for ca certs: {Name:mka1c88019f6064d2983ac988db71a67aeb65696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:30:12.467038    3848 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key
	I0815 16:30:12.467111    3848 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key
	I0815 16:30:12.467121    3848 certs.go:256] generating profile certs ...
	I0815 16:30:12.467229    3848 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.key
	I0815 16:30:12.467304    3848 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key.7af4c91a
	I0815 16:30:12.467369    3848 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key
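
Each "skipping valid signed profile cert regeneration" line means the existing cert parsed, has not expired, and still covers the required SANs. A sketch of such a check with crypto/x509; the helper and the exact reuse criteria are an approximation for illustration, not a copy of minikube's logic:

// cert_reuse.go - decide whether an existing cert can be reused.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func certStillValid(path string, wantSANs []string) bool {
	data, err := os.ReadFile(path)
	if err != nil {
		return false
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil || time.Now().After(cert.NotAfter) {
		return false
	}
	// VerifyHostname accepts both DNS names and IP SANs.
	for _, san := range wantSANs {
		if cert.VerifyHostname(san) != nil {
			return false
		}
	}
	return true
}

func main() {
	// SANs from the server cert generated earlier in this log.
	sans := []string{"127.0.0.1", "192.169.0.5", "ha-138000", "localhost", "minikube"}
	fmt.Println(certStillValid("apiserver.crt", sans))
}
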
	I0815 16:30:12.467377    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 16:30:12.467397    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 16:30:12.467414    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 16:30:12.467432    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 16:30:12.467450    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 16:30:12.467479    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 16:30:12.467508    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 16:30:12.467527    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 16:30:12.467627    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem (1338 bytes)
	W0815 16:30:12.467674    3848 certs.go:480] ignoring /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498_empty.pem, impossibly tiny 0 bytes
	I0815 16:30:12.467683    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 16:30:12.467721    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem (1082 bytes)
	I0815 16:30:12.467762    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem (1123 bytes)
	I0815 16:30:12.467793    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem (1679 bytes)
	I0815 16:30:12.467866    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:30:12.467898    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:30:12.467918    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem -> /usr/share/ca-certificates/1498.pem
	I0815 16:30:12.467935    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /usr/share/ca-certificates/14982.pem
	I0815 16:30:12.468350    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 16:30:12.503573    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 16:30:12.529609    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 16:30:12.555283    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 16:30:12.583638    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 16:30:12.612822    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 16:30:12.658082    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 16:30:12.709731    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 16:30:12.747480    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 16:30:12.797444    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem --> /usr/share/ca-certificates/1498.pem (1338 bytes)
	I0815 16:30:12.830947    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /usr/share/ca-certificates/14982.pem (1708 bytes)
	I0815 16:30:12.850811    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 16:30:12.864245    3848 ssh_runner.go:195] Run: openssl version
	I0815 16:30:12.868404    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 16:30:12.876802    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:30:12.880151    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:05 /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:30:12.880186    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:30:12.884283    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 16:30:12.892538    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1498.pem && ln -fs /usr/share/ca-certificates/1498.pem /etc/ssl/certs/1498.pem"
	I0815 16:30:12.900652    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1498.pem
	I0815 16:30:12.904017    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:14 /usr/share/ca-certificates/1498.pem
	I0815 16:30:12.904050    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1498.pem
	I0815 16:30:12.908285    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1498.pem /etc/ssl/certs/51391683.0"
	I0815 16:30:12.916567    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14982.pem && ln -fs /usr/share/ca-certificates/14982.pem /etc/ssl/certs/14982.pem"
	I0815 16:30:12.924847    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14982.pem
	I0815 16:30:12.928159    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:14 /usr/share/ca-certificates/14982.pem
	I0815 16:30:12.928193    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14982.pem
	I0815 16:30:12.932352    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14982.pem /etc/ssl/certs/3ec20f2e.0"
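
	[Editor's note] The three test/hash/symlink triples above install each shared PEM into the system trust store: `openssl x509 -hash -noout` prints the subject-name hash, and the `<hash>.0` symlink in /etc/ssl/certs is the name OpenSSL's lookup expects. A minimal Go sketch of the same loop (assumes `openssl` on PATH and write access to /etc/ssl/certs; simplified, not minikube's actual certs.go, which links via /etc/ssl/certs/<name>.pem first):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCertSymlink creates /etc/ssl/certs/<subject-hash>.0 pointing at the
// PEM, the combined effect of the `openssl x509 -hash` + `ln -fs` pair above.
func installCertSymlink(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem, as logged
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // `ln -fs` semantics: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	// The three shared certs staged in the log above.
	for _, p := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/1498.pem",
		"/usr/share/ca-certificates/14982.pem",
	} {
		if err := installCertSymlink(p); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
```
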
	I0815 16:30:12.940679    3848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 16:30:12.943953    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 16:30:12.948281    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 16:30:12.952498    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 16:30:12.956859    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 16:30:12.961066    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 16:30:12.965237    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
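
	[Editor's note] Each `-checkend 86400` run above asks whether a control-plane certificate expires within 24 hours. The same check in plain Go with crypto/x509, as an illustrative analogue of the openssl invocation (not what ssh_runner executes remotely):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within d; `openssl x509 -checkend` exits non-zero in exactly that case.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// One of the certs the log checks; 86400s == 24h.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```
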
	I0815 16:30:12.969424    3848 kubeadm.go:392] StartCluster: {Name:ha-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:30:12.969537    3848 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0815 16:30:12.983217    3848 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 16:30:12.990985    3848 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 16:30:12.990998    3848 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 16:30:12.991037    3848 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 16:30:12.998611    3848 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 16:30:12.998906    3848 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-138000" does not appear in /Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:30:12.998990    3848 kubeconfig.go:62] /Users/jenkins/minikube-integration/19452-977/kubeconfig needs updating (will repair): [kubeconfig missing "ha-138000" cluster setting kubeconfig missing "ha-138000" context setting]
	I0815 16:30:12.999150    3848 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-977/kubeconfig: {Name:mk0f04b0199f84dc20eb294d5b790451b41e43fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:30:12.999761    3848 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:30:12.999936    3848 kapi.go:59] client config for ha-138000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.key", CAFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x3a6cf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0815 16:30:13.000222    3848 cert_rotation.go:140] Starting client certificate rotation controller
	I0815 16:30:13.000394    3848 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 16:30:13.007927    3848 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0815 16:30:13.007944    3848 kubeadm.go:597] duration metric: took 16.941718ms to restartPrimaryControlPlane
	I0815 16:30:13.007950    3848 kubeadm.go:394] duration metric: took 38.534887ms to StartCluster
	I0815 16:30:13.007960    3848 settings.go:142] acquiring lock: {Name:mk694dad19d37394fa6b13c51a7dc54b62e97c12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:30:13.008036    3848 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:30:13.008396    3848 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-977/kubeconfig: {Name:mk0f04b0199f84dc20eb294d5b790451b41e43fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
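
	[Editor's note] The repair logged at kubeconfig.go:62 amounts to re-adding the profile's cluster, user, and context stanzas under the file lock acquired above. A hedged sketch of that update with client-go's clientcmd (server address and cert paths copied from the log; this helper is not minikube's kubeconfig package):

```go
package main

import (
	"log"

	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/Users/jenkins/minikube-integration/19452-977/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		log.Fatal(err)
	}
	profileDir := "/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000"
	// Re-add the cluster, user, and context stanzas the verifier found missing.
	cfg.Clusters["ha-138000"] = &clientcmdapi.Cluster{
		Server:               "https://192.169.0.5:8443",
		CertificateAuthority: "/Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt",
	}
	cfg.AuthInfos["ha-138000"] = &clientcmdapi.AuthInfo{
		ClientCertificate: profileDir + "/client.crt",
		ClientKey:         profileDir + "/client.key",
	}
	cfg.Contexts["ha-138000"] = &clientcmdapi.Context{Cluster: "ha-138000", AuthInfo: "ha-138000"}
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		log.Fatal(err)
	}
}
```
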
	I0815 16:30:13.008625    3848 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 16:30:13.008644    3848 start.go:241] waiting for startup goroutines ...
	I0815 16:30:13.008652    3848 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 16:30:13.008752    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:30:13.052465    3848 out.go:177] * Enabled addons: 
	I0815 16:30:13.073695    3848 addons.go:510] duration metric: took 65.048594ms for enable addons: enabled=[]
	I0815 16:30:13.073733    3848 start.go:246] waiting for cluster config update ...
	I0815 16:30:13.073745    3848 start.go:255] writing updated cluster config ...
	I0815 16:30:13.095512    3848 out.go:201] 
	I0815 16:30:13.116951    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:30:13.117068    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:30:13.139649    3848 out.go:177] * Starting "ha-138000-m02" control-plane node in "ha-138000" cluster
	I0815 16:30:13.181551    3848 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:30:13.181610    3848 cache.go:56] Caching tarball of preloaded images
	I0815 16:30:13.181807    3848 preload.go:172] Found /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0815 16:30:13.181826    3848 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:30:13.181935    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:30:13.182895    3848 start.go:360] acquireMachinesLock for ha-138000-m02: {Name:mkac8034c8a60617271f2754a03dcaf74fc4fef7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:30:13.183018    3848 start.go:364] duration metric: took 98.069µs to acquireMachinesLock for "ha-138000-m02"
	I0815 16:30:13.183044    3848 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:30:13.183051    3848 fix.go:54] fixHost starting: m02
	I0815 16:30:13.183444    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:30:13.183470    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:30:13.192973    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52251
	I0815 16:30:13.193340    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:30:13.193664    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:30:13.193677    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:30:13.193949    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:30:13.194068    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:30:13.194158    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetState
	I0815 16:30:13.194250    3848 main.go:141] libmachine: (ha-138000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:30:13.194330    3848 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid from json: 3670
	I0815 16:30:13.195266    3848 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid 3670 missing from process table
	I0815 16:30:13.195300    3848 fix.go:112] recreateIfNeeded on ha-138000-m02: state=Stopped err=<nil>
	I0815 16:30:13.195308    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	W0815 16:30:13.195387    3848 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:30:13.216598    3848 out.go:177] * Restarting existing hyperkit VM for "ha-138000-m02" ...
	I0815 16:30:13.258591    3848 main.go:141] libmachine: (ha-138000-m02) Calling .Start
	I0815 16:30:13.258850    3848 main.go:141] libmachine: (ha-138000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:30:13.258951    3848 main.go:141] libmachine: (ha-138000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/hyperkit.pid
	I0815 16:30:13.260726    3848 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid 3670 missing from process table
	I0815 16:30:13.260746    3848 main.go:141] libmachine: (ha-138000-m02) DBG | pid 3670 is in state "Stopped"
	I0815 16:30:13.260762    3848 main.go:141] libmachine: (ha-138000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/hyperkit.pid...
	I0815 16:30:13.261090    3848 main.go:141] libmachine: (ha-138000-m02) DBG | Using UUID 4cff9b5a-9fe3-4215-9139-05f05b79bce3
	I0815 16:30:13.290755    3848 main.go:141] libmachine: (ha-138000-m02) DBG | Generated MAC 9a:c2:e9:d7:1c:58
	I0815 16:30:13.290775    3848 main.go:141] libmachine: (ha-138000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000
	I0815 16:30:13.290894    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"4cff9b5a-9fe3-4215-9139-05f05b79bce3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003beae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:30:13.290919    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"4cff9b5a-9fe3-4215-9139-05f05b79bce3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003beae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:30:13.290973    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "4cff9b5a-9fe3-4215-9139-05f05b79bce3", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/ha-138000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"}
	I0815 16:30:13.291003    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 4cff9b5a-9fe3-4215-9139-05f05b79bce3 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/ha-138000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"
	I0815 16:30:13.291039    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0815 16:30:13.292431    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 DEBUG: hyperkit: Pid is 4167
	I0815 16:30:13.292922    3848 main.go:141] libmachine: (ha-138000-m02) DBG | Attempt 0
	I0815 16:30:13.292931    3848 main.go:141] libmachine: (ha-138000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:30:13.292988    3848 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid from json: 4167
	I0815 16:30:13.294816    3848 main.go:141] libmachine: (ha-138000-m02) DBG | Searching for 9a:c2:e9:d7:1c:58 in /var/db/dhcpd_leases ...
	I0815 16:30:13.294866    3848 main.go:141] libmachine: (ha-138000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0815 16:30:13.294889    3848 main.go:141] libmachine: (ha-138000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:30:13.294903    3848 main.go:141] libmachine: (ha-138000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfdfcb}
	I0815 16:30:13.294915    3848 main.go:141] libmachine: (ha-138000-m02) DBG | Found match: 9a:c2:e9:d7:1c:58
	I0815 16:30:13.294931    3848 main.go:141] libmachine: (ha-138000-m02) DBG | IP: 192.169.0.6
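
	[Editor's note] The DBG lines above show how the driver resolves the restarted VM's address: it matches the generated MAC against macOS's vmnet lease database. A small sketch of that scan (the `ip_address=`/`hw_address=` field names and their ordering within each lease block are assumptions about /var/db/dhcpd_leases, inferred from the parsed entries in the log):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipForMAC scans the vmnet lease file for an entry whose hw_address
// contains mac and returns the paired ip_address.
func ipForMAC(leaseFile, mac string) (string, error) {
	f, err := os.Open(leaseFile)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac):
			return ip, nil // ip_address precedes hw_address in each block
		}
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := ipForMAC("/var/db/dhcpd_leases", "9a:c2:e9:d7:1c:58")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ip) // expect 192.169.0.6, per the "Found match" lines above
}
```
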
	I0815 16:30:13.294997    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetConfigRaw
	I0815 16:30:13.295728    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetIP
	I0815 16:30:13.295920    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:30:13.296384    3848 machine.go:93] provisionDockerMachine start ...
	I0815 16:30:13.296394    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:30:13.296516    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:13.296606    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:13.296695    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:13.296801    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:13.296905    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:13.297071    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:13.297242    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:30:13.297249    3848 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 16:30:13.300476    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0815 16:30:13.310276    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0815 16:30:13.311421    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:30:13.311448    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:30:13.311463    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:30:13.311475    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:30:13.698130    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0815 16:30:13.698145    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0815 16:30:13.812764    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:30:13.812785    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:30:13.812794    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:30:13.812888    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:30:13.813620    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0815 16:30:13.813637    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0815 16:30:19.405369    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:19 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0815 16:30:19.405428    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:19 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0815 16:30:19.405441    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:19 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0815 16:30:19.429063    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:19 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0815 16:30:24.364782    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 16:30:24.364794    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetMachineName
	I0815 16:30:24.364947    3848 buildroot.go:166] provisioning hostname "ha-138000-m02"
	I0815 16:30:24.364958    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetMachineName
	I0815 16:30:24.365057    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:24.365147    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:24.365238    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.365323    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.365453    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:24.365589    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:24.365741    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:30:24.365749    3848 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-138000-m02 && echo "ha-138000-m02" | sudo tee /etc/hostname
	I0815 16:30:24.435748    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-138000-m02
	
	I0815 16:30:24.435762    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:24.435893    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:24.435990    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.436082    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.436186    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:24.436313    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:24.436463    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:30:24.436475    3848 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-138000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-138000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-138000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 16:30:24.504475    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
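
	[Editor's note] Every "About to run SSH command" exchange above is a key-authenticated session against the node. A minimal equivalent with golang.org/x/crypto/ssh (IP, username, and key path taken from the log; host-key verification is skipped here only to mirror provisioning a freshly created VM, and is not suitable elsewhere):

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // provisioning-only shortcut
	}
	client, err := ssh.Dial("tcp", "192.169.0.6:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// The hostname-provisioning command exactly as logged above.
	out, err := sess.CombinedOutput(`sudo hostname ha-138000-m02 && echo "ha-138000-m02" | sudo tee /etc/hostname`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}
```
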
	I0815 16:30:24.504492    3848 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19452-977/.minikube CaCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19452-977/.minikube}
	I0815 16:30:24.504503    3848 buildroot.go:174] setting up certificates
	I0815 16:30:24.504519    3848 provision.go:84] configureAuth start
	I0815 16:30:24.504526    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetMachineName
	I0815 16:30:24.504663    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetIP
	I0815 16:30:24.504758    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:24.504846    3848 provision.go:143] copyHostCerts
	I0815 16:30:24.504877    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:30:24.504929    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem, removing ...
	I0815 16:30:24.504935    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:30:24.505124    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem (1082 bytes)
	I0815 16:30:24.505339    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:30:24.505371    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem, removing ...
	I0815 16:30:24.505375    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:30:24.505446    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem (1123 bytes)
	I0815 16:30:24.505596    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:30:24.505624    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem, removing ...
	I0815 16:30:24.505628    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:30:24.505696    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem (1679 bytes)
	I0815 16:30:24.505845    3848 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem org=jenkins.ha-138000-m02 san=[127.0.0.1 192.169.0.6 ha-138000-m02 localhost minikube]
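
	[Editor's note] provision.go:117 signs a per-machine server certificate against the CA pair, with the SAN list shown. A compact crypto/x509 sketch of that step (assumes an RSA PKCS#1 CA key and simplifies serial and validity handling; not docker-machine's actual provisioner):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

// must keeps the sketch compact; any failure aborts.
func must[T any](v T, err error) T {
	if err != nil {
		log.Fatal(err)
	}
	return v
}

// derOf returns the DER bytes of the first PEM block in path.
func derOf(path string) []byte {
	block, _ := pem.Decode(must(os.ReadFile(path)))
	if block == nil {
		log.Fatalf("%s: no PEM block", path)
	}
	return block.Bytes
}

func main() {
	certs := "/Users/jenkins/minikube-integration/19452-977/.minikube/certs"
	caCert := must(x509.ParseCertificate(derOf(certs + "/ca.pem")))
	caKey := must(x509.ParsePKCS1PrivateKey(derOf(certs + "/ca-key.pem"))) // assumes an RSA PKCS#1 CA key

	serverKey := must(rsa.GenerateKey(rand.Reader, 2048))
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),                          // simplified serial
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-138000-m02"}}, // org as logged
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // placeholder validity
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs exactly as logged: 127.0.0.1 192.169.0.6 ha-138000-m02 localhost minikube
		DNSNames:    []string{"ha-138000-m02", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
	}
	der := must(x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey))
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		log.Fatal(err)
	}
}
```
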
	I0815 16:30:24.669808    3848 provision.go:177] copyRemoteCerts
	I0815 16:30:24.669859    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 16:30:24.669875    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:24.670016    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:24.670138    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.670247    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:24.670341    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	I0815 16:30:24.707125    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 16:30:24.707202    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 16:30:24.726013    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 16:30:24.726070    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 16:30:24.745370    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 16:30:24.745429    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 16:30:24.765407    3848 provision.go:87] duration metric: took 260.879651ms to configureAuth
	I0815 16:30:24.765419    3848 buildroot.go:189] setting minikube options for container-runtime
	I0815 16:30:24.765586    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:30:24.765614    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:30:24.765750    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:24.765841    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:24.765917    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.765992    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.766073    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:24.766180    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:24.766348    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:30:24.766356    3848 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0815 16:30:24.825444    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0815 16:30:24.825455    3848 buildroot.go:70] root file system type: tmpfs
	I0815 16:30:24.825535    3848 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0815 16:30:24.825546    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:24.825668    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:24.825761    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.825848    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.825931    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:24.826067    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:24.826205    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:30:24.826249    3848 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0815 16:30:24.894944    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0815 16:30:24.894961    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:24.895099    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:24.895204    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.895287    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.895382    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:24.895505    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:24.895640    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:30:24.895652    3848 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0815 16:30:26.552071    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0815 16:30:26.552086    3848 machine.go:96] duration metric: took 13.255738864s to provisionDockerMachine
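
	[Editor's note] The diff-or-replace one-liner above is what makes the unit update idempotent: the rendered file is only swapped in, and docker only restarted, when it differs from the installed unit (here the diff fails with "can't stat" because no unit exists yet, so the replace branch runs). The same pattern in Go, as a local sketch of the logic (the real step runs under sudo over SSH):

```go
package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	const installed = "/lib/systemd/system/docker.service"
	const rendered = "/lib/systemd/system/docker.service.new"

	newUnit, err := os.ReadFile(rendered)
	if err != nil {
		log.Fatal(err)
	}
	oldUnit, err := os.ReadFile(installed) // fails on first boot, as in the log's "can't stat"
	if err == nil && bytes.Equal(oldUnit, newUnit) {
		return // unchanged: skip daemon-reload/enable/restart entirely
	}
	if err := os.Rename(rendered, installed); err != nil {
		log.Fatal(err)
	}
	// Mirror the systemctl sequence from the logged command.
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "-f", "enable", "docker"},
		{"systemctl", "-f", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			log.Fatalf("%s: %v\n%s", args, err, out)
		}
	}
}
```
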
	I0815 16:30:26.552093    3848 start.go:293] postStartSetup for "ha-138000-m02" (driver="hyperkit")
	I0815 16:30:26.552100    3848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 16:30:26.552110    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:30:26.552311    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 16:30:26.552326    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:26.552426    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:26.552517    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:26.552610    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:26.552712    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	I0815 16:30:26.593353    3848 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 16:30:26.598425    3848 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 16:30:26.598438    3848 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/addons for local assets ...
	I0815 16:30:26.598548    3848 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/files for local assets ...
	I0815 16:30:26.598699    3848 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> 14982.pem in /etc/ssl/certs
	I0815 16:30:26.598705    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /etc/ssl/certs/14982.pem
	I0815 16:30:26.598861    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 16:30:26.610066    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:30:26.645456    3848 start.go:296] duration metric: took 93.354607ms for postStartSetup
	I0815 16:30:26.645497    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:30:26.645674    3848 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0815 16:30:26.645688    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:26.645776    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:26.645850    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:26.645933    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:26.646015    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	I0815 16:30:26.683361    3848 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0815 16:30:26.683423    3848 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0815 16:30:26.737495    3848 fix.go:56] duration metric: took 13.554488062s for fixHost
	I0815 16:30:26.737525    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:26.737661    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:26.737749    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:26.737848    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:26.737943    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:26.738080    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:26.738216    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:30:26.738224    3848 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 16:30:26.796943    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723764627.049155775
	
	I0815 16:30:26.796953    3848 fix.go:216] guest clock: 1723764627.049155775
	I0815 16:30:26.796959    3848 fix.go:229] Guest: 2024-08-15 16:30:27.049155775 -0700 PDT Remote: 2024-08-15 16:30:26.737509 -0700 PDT m=+32.739307986 (delta=311.646775ms)
	I0815 16:30:26.796973    3848 fix.go:200] guest clock delta is within tolerance: 311.646775ms
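
	[Editor's note] fix.go compares the guest's `date +%s.%N` output against the host clock and accepts the ~312ms delta. A sketch of that parse-and-compare (the 2s tolerance below is an assumption; the log does not print the threshold it uses):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Guest output and host-side timestamp exactly as captured in the log.
	guest, err := parseGuestClock("1723764627.049155775")
	if err != nil {
		fmt.Println(err)
		return
	}
	host := time.Date(2024, 8, 15, 16, 30, 26, 737509000, time.FixedZone("PDT", -7*60*60))
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= 2*time.Second)
	// prints: delta=311.646775ms within tolerance: true
}
```
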
	I0815 16:30:26.796977    3848 start.go:83] releasing machines lock for "ha-138000-m02", held for 13.613993837s
	I0815 16:30:26.796994    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:30:26.797121    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetIP
	I0815 16:30:26.821561    3848 out.go:177] * Found network options:
	I0815 16:30:26.841357    3848 out.go:177]   - NO_PROXY=192.169.0.5
	W0815 16:30:26.862556    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 16:30:26.862605    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:30:26.863433    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:30:26.863671    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:30:26.863815    3848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 16:30:26.863856    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	W0815 16:30:26.863902    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 16:30:26.863997    3848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 16:30:26.864019    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:26.864116    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:26.864226    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:26.864284    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:26.864479    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:26.864535    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:26.864691    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	I0815 16:30:26.864752    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:26.864886    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	W0815 16:30:26.897510    3848 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 16:30:26.897576    3848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 16:30:26.944949    3848 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 16:30:26.944964    3848 start.go:495] detecting cgroup driver to use...
	I0815 16:30:26.945031    3848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:30:26.959965    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0815 16:30:26.969052    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0815 16:30:26.977789    3848 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0815 16:30:26.977840    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0815 16:30:26.986870    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:30:26.995871    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0815 16:30:27.004811    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:30:27.013722    3848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 16:30:27.022692    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0815 16:30:27.031569    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0815 16:30:27.040462    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0815 16:30:27.049386    3848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 16:30:27.057419    3848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 16:30:27.065508    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:27.164154    3848 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0815 16:30:27.181165    3848 start.go:495] detecting cgroup driver to use...
	I0815 16:30:27.181250    3848 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0815 16:30:27.192595    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:30:27.203037    3848 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 16:30:27.216573    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:30:27.228211    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:30:27.239268    3848 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0815 16:30:27.258656    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:30:27.269954    3848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:30:27.284667    3848 ssh_runner.go:195] Run: which cri-dockerd
	I0815 16:30:27.287552    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0815 16:30:27.295653    3848 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0815 16:30:27.309091    3848 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0815 16:30:27.403676    3848 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0815 16:30:27.500434    3848 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0815 16:30:27.500464    3848 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0815 16:30:27.514754    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:27.610670    3848 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0815 16:30:29.951174    3848 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.340492876s)
	I0815 16:30:29.951241    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0815 16:30:29.961656    3848 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0815 16:30:29.974207    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:30:29.984718    3848 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0815 16:30:30.078933    3848 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0815 16:30:30.191991    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:30.301187    3848 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0815 16:30:30.314601    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:30:30.325440    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:30.420867    3848 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0815 16:30:30.486340    3848 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0815 16:30:30.486435    3848 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0815 16:30:30.491068    3848 start.go:563] Will wait 60s for crictl version
	I0815 16:30:30.491127    3848 ssh_runner.go:195] Run: which crictl
	I0815 16:30:30.494150    3848 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 16:30:30.523583    3848 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0815 16:30:30.523658    3848 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:30:30.541608    3848 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:30:30.598613    3848 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0815 16:30:30.658061    3848 out.go:177]   - env NO_PROXY=192.169.0.5
	I0815 16:30:30.695353    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetIP
	I0815 16:30:30.695714    3848 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0815 16:30:30.700361    3848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 16:30:30.709893    3848 mustload.go:65] Loading cluster: ha-138000
	I0815 16:30:30.710062    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:30:30.710316    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:30:30.710336    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:30:30.719005    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52273
	I0815 16:30:30.719360    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:30:30.719741    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:30:30.719750    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:30:30.719981    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:30:30.720103    3848 main.go:141] libmachine: (ha-138000) Calling .GetState
	I0815 16:30:30.720187    3848 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:30:30.720267    3848 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid from json: 3862
	I0815 16:30:30.721211    3848 host.go:66] Checking if "ha-138000" exists ...
	I0815 16:30:30.721471    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:30:30.721491    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:30:30.729999    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52275
	I0815 16:30:30.730336    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:30:30.730678    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:30:30.730693    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:30:30.730926    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:30:30.731056    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:30:30.731175    3848 certs.go:68] Setting up /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000 for IP: 192.169.0.6
	I0815 16:30:30.731181    3848 certs.go:194] generating shared ca certs ...
	I0815 16:30:30.731197    3848 certs.go:226] acquiring lock for ca certs: {Name:mka1c88019f6064d2983ac988db71a67aeb65696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:30:30.731336    3848 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key
	I0815 16:30:30.731387    3848 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key
	I0815 16:30:30.731396    3848 certs.go:256] generating profile certs ...
	I0815 16:30:30.731509    3848 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.key
	I0815 16:30:30.731595    3848 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key.5f0053a1
	I0815 16:30:30.731651    3848 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key
	I0815 16:30:30.731658    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 16:30:30.731679    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 16:30:30.731700    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 16:30:30.731722    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 16:30:30.731740    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 16:30:30.731768    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 16:30:30.731791    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 16:30:30.731809    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 16:30:30.731883    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem (1338 bytes)
	W0815 16:30:30.731920    3848 certs.go:480] ignoring /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498_empty.pem, impossibly tiny 0 bytes
	I0815 16:30:30.731928    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 16:30:30.731973    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem (1082 bytes)
	I0815 16:30:30.732017    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem (1123 bytes)
	I0815 16:30:30.732045    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem (1679 bytes)
	I0815 16:30:30.732121    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:30:30.732157    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /usr/share/ca-certificates/14982.pem
	I0815 16:30:30.732177    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:30:30.732194    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem -> /usr/share/ca-certificates/1498.pem
	I0815 16:30:30.732219    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:30.732316    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:30.732406    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:30.732529    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:30.732609    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
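Each "ssh_runner.go:195] Run:" entry in this log is one command executed over the SSH client that sshutil.go:53 just opened (IP 192.169.0.5, port 22, user docker, the id_rsa key shown above). A stripped-down sketch of that pattern, assuming golang.org/x/crypto/ssh rather than minikube's actual runner:

// runner.go - not minikube's ssh_runner.go; a minimal illustration of what one
// "Run:" log line represents: a single command executed over an SSH session.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func run(client *ssh.Client, cmd string) (string, error) {
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd) // stdout+stderr together, as the log captures
	return string(out), err
}

func main() {
	key, err := os.ReadFile("/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "192.169.0.5:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	out, err := run(client, "sudo systemctl is-active --quiet service containerd; echo $?")
	fmt.Println(out, err)
}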
	I0815 16:30:30.763783    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0815 16:30:30.767449    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0815 16:30:30.776129    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0815 16:30:30.779163    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0815 16:30:30.787730    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0815 16:30:30.791082    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0815 16:30:30.799754    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0815 16:30:30.802809    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0815 16:30:30.811618    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0815 16:30:30.814650    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0815 16:30:30.822963    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0815 16:30:30.826004    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0815 16:30:30.834906    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 16:30:30.854912    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 16:30:30.874577    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 16:30:30.894388    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 16:30:30.914413    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 16:30:30.933887    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 16:30:30.953772    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 16:30:30.973419    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 16:30:30.992862    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /usr/share/ca-certificates/14982.pem (1708 bytes)
	I0815 16:30:31.012391    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 16:30:31.031916    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem --> /usr/share/ca-certificates/1498.pem (1338 bytes)
	I0815 16:30:31.051694    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0815 16:30:31.065167    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0815 16:30:31.078573    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0815 16:30:31.091997    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0815 16:30:31.105622    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0815 16:30:31.119143    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0815 16:30:31.132670    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0815 16:30:31.146406    3848 ssh_runner.go:195] Run: openssl version
	I0815 16:30:31.150444    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 16:30:31.158651    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:30:31.162017    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:05 /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:30:31.162055    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:30:31.166191    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 16:30:31.174561    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1498.pem && ln -fs /usr/share/ca-certificates/1498.pem /etc/ssl/certs/1498.pem"
	I0815 16:30:31.182745    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1498.pem
	I0815 16:30:31.186223    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:14 /usr/share/ca-certificates/1498.pem
	I0815 16:30:31.186262    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1498.pem
	I0815 16:30:31.190437    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1498.pem /etc/ssl/certs/51391683.0"
	I0815 16:30:31.198642    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14982.pem && ln -fs /usr/share/ca-certificates/14982.pem /etc/ssl/certs/14982.pem"
	I0815 16:30:31.207129    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14982.pem
	I0815 16:30:31.210527    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:14 /usr/share/ca-certificates/14982.pem
	I0815 16:30:31.210565    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14982.pem
	I0815 16:30:31.214780    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14982.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 16:30:31.223055    3848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 16:30:31.226404    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 16:30:31.230624    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 16:30:31.234964    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 16:30:31.239281    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 16:30:31.243508    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 16:30:31.247740    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
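The six openssl invocations above each pass "-checkend 86400": exit 0 only if the certificate is still valid 86400 seconds (24 hours) from now. A hedged Go equivalent of that check; certValidFor is an illustrative helper name, not a minikube function:

// certcheck.go - a minimal sketch of the same validity test the log performs
// with `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func certValidFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data) // first PEM block holds the certificate
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Equivalent of -checkend: NotAfter must lie beyond now+d.
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := certValidFor("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("valid for next 24h:", ok)
}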
	I0815 16:30:31.251885    3848 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.0 docker true true} ...
	I0815 16:30:31.251948    3848 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-138000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 16:30:31.251968    3848 kube-vip.go:115] generating kube-vip config ...
	I0815 16:30:31.251997    3848 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 16:30:31.264157    3848 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 16:30:31.264200    3848 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0815 16:30:31.264247    3848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 16:30:31.272799    3848 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 16:30:31.272844    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0815 16:30:31.280999    3848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0815 16:30:31.294195    3848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 16:30:31.307421    3848 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0815 16:30:31.321201    3848 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0815 16:30:31.324137    3848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
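The one-liner above is an idempotent hosts-file update: grep -v strips any stale control-plane.minikube.internal mapping, echo appends the fresh one, and sudo cp installs the rebuilt file. Roughly the same logic in Go; pinHost is a hypothetical name, not minikube code:

// hostspin.go - a rough Go equivalent of the shell idiom this log uses for
// both host.minikube.internal and control-plane.minikube.internal.
package main

import (
	"fmt"
	"os"
	"strings"
)

func pinHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // same filter as `grep -v $'\t<name>$'`
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.169.0.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}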
	I0815 16:30:31.334188    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:31.429450    3848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:30:31.443961    3848 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 16:30:31.444161    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:30:31.465375    3848 out.go:177] * Verifying Kubernetes components...
	I0815 16:30:31.507025    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:31.625968    3848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:30:31.645410    3848 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:30:31.645610    3848 kapi.go:59] client config for ha-138000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.key", CAFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x3a6cf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0815 16:30:31.645648    3848 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0815 16:30:31.645835    3848 node_ready.go:35] waiting up to 6m0s for node "ha-138000-m02" to be "Ready" ...
	I0815 16:30:31.645920    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:31.645925    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:31.645933    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:31.645936    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.053028    3848 round_trippers.go:574] Response Status: 200 OK in 8407 milliseconds
	I0815 16:30:40.053934    3848 node_ready.go:49] node "ha-138000-m02" has status "Ready":"True"
	I0815 16:30:40.053949    3848 node_ready.go:38] duration metric: took 8.408123647s for node "ha-138000-m02" to be "Ready" ...
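node_ready.go implements the wait above by re-fetching the Node object until its Ready condition reports True; that is what the paired GET requests in this log are. A condensed client-go sketch under that assumption:

// nodeready.go - not minikube's exact code; a minimal poll loop for the
// Ready condition of node ha-138000-m02, using the kubeconfig from this run.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19452-977/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "ha-138000-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("node ready:", err == nil)
}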
	I0815 16:30:40.053959    3848 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 16:30:40.053997    3848 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0815 16:30:40.054008    3848 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0815 16:30:40.054051    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:30:40.054057    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.054064    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.054066    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.076049    3848 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0815 16:30:40.083485    3848 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dmgt5" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.083552    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-dmgt5
	I0815 16:30:40.083559    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.083565    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.083569    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.090478    3848 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 16:30:40.091010    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:40.091019    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.091025    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.091028    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.094713    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:40.095017    3848 pod_ready.go:93] pod "coredns-6f6b679f8f-dmgt5" in "kube-system" namespace has status "Ready":"True"
	I0815 16:30:40.095031    3848 pod_ready.go:82] duration metric: took 11.52447ms for pod "coredns-6f6b679f8f-dmgt5" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.095040    3848 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-zc8jj" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.095087    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-zc8jj
	I0815 16:30:40.095094    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.095102    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.095107    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.101746    3848 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 16:30:40.102483    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:40.102492    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.102500    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.102503    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.105983    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:40.106569    3848 pod_ready.go:93] pod "coredns-6f6b679f8f-zc8jj" in "kube-system" namespace has status "Ready":"True"
	I0815 16:30:40.106587    3848 pod_ready.go:82] duration metric: took 11.533246ms for pod "coredns-6f6b679f8f-zc8jj" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.106595    3848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.106638    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000
	I0815 16:30:40.106644    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.106651    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.106654    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.110887    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:30:40.111881    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:40.111893    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.111902    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.111907    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.114794    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:40.115181    3848 pod_ready.go:93] pod "etcd-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:30:40.115194    3848 pod_ready.go:82] duration metric: took 8.594007ms for pod "etcd-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.115201    3848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.115242    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m02
	I0815 16:30:40.115247    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.115252    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.115256    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.121257    3848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 16:30:40.121684    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:40.121694    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.121704    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.121710    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.125990    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:30:40.126507    3848 pod_ready.go:93] pod "etcd-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:30:40.126520    3848 pod_ready.go:82] duration metric: took 11.312949ms for pod "etcd-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.126528    3848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.126573    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:30:40.126579    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.126585    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.126589    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.129916    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:40.254208    3848 request.go:632] Waited for 123.846339ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:30:40.254247    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:30:40.254252    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.254262    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.254299    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.258157    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:40.258510    3848 pod_ready.go:93] pod "etcd-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:30:40.258520    3848 pod_ready.go:82] duration metric: took 131.98589ms for pod "etcd-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
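The request.go:632 "client-side throttling" waits above come from the client's own rate limiter, not from the API server: the rest.Config dumped at kapi.go:59 left QPS and Burst at 0, so client-go falls back to its defaults (5 QPS, burst 10) and queues bursts of GETs. A sketch of raising those limits; the numbers here are illustrative, not minikube's:

// throttle.go - where the "Waited for ... due to client-side throttling"
// delays originate, and how a client could widen the limiter.
package main

import (
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/flowcontrol"
)

func newConfig() (*rest.Config, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19452-977/kubeconfig")
	if err != nil {
		return nil, err
	}
	// Replace the default token bucket (5 QPS, burst 10) with a wider one so
	// back-to-back pod/node GETs no longer queue behind the limiter.
	cfg.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(50, 100)
	return cfg, nil
}

func main() {
	if _, err := newConfig(); err != nil {
		panic(err)
	}
}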
	I0815 16:30:40.258532    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.454350    3848 request.go:632] Waited for 195.778452ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000
	I0815 16:30:40.454424    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000
	I0815 16:30:40.454430    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.454436    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.454441    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.457270    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:40.654210    3848 request.go:632] Waited for 196.49648ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:40.654247    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:40.654254    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.654300    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.654306    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.662420    3848 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0815 16:30:40.662780    3848 pod_ready.go:98] node "ha-138000" hosting pod "kube-apiserver-ha-138000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-138000" has status "Ready":"False"
	I0815 16:30:40.662798    3848 pod_ready.go:82] duration metric: took 404.260054ms for pod "kube-apiserver-ha-138000" in "kube-system" namespace to be "Ready" ...
	E0815 16:30:40.662809    3848 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-138000" hosting pod "kube-apiserver-ha-138000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-138000" has status "Ready":"False"
	I0815 16:30:40.662819    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.854147    3848 request.go:632] Waited for 191.277341ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:40.854226    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:40.854232    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.854238    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.854243    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.859631    3848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 16:30:41.054463    3848 request.go:632] Waited for 194.266573ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:41.054497    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:41.054501    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:41.054509    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:41.054513    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:41.058210    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:41.254872    3848 request.go:632] Waited for 91.867207ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:41.254917    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:41.254966    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:41.254978    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:41.254982    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:41.258343    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:41.455877    3848 request.go:632] Waited for 196.977249ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:41.455912    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:41.455919    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:41.455925    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:41.455931    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:41.457855    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:41.664056    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:41.664082    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:41.664093    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:41.664100    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:41.667876    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:41.854208    3848 request.go:632] Waited for 185.493412ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:41.854247    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:41.854253    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:41.854260    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:41.854264    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:41.856823    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:42.163578    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:42.163664    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:42.163680    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:42.163716    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:42.167135    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:42.254205    3848 request.go:632] Waited for 86.267935ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:42.254261    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:42.254269    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:42.254286    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:42.254324    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:42.257709    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:42.664326    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:42.664344    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:42.664353    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:42.664357    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:42.666960    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:42.667548    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:42.667555    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:42.667561    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:42.667564    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:42.669222    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:42.669539    3848 pod_ready.go:103] pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace has status "Ready":"False"
	I0815 16:30:43.163236    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:43.163273    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:43.163281    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:43.163286    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:43.165588    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:43.166081    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:43.166088    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:43.166094    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:43.166097    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:43.167727    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:43.663181    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:43.663266    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:43.663274    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:43.663277    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:43.665851    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:43.666288    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:43.666295    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:43.666301    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:43.666305    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:43.669495    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:44.163768    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:44.163782    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:44.163788    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:44.163800    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:44.166284    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:44.166820    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:44.166828    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:44.166834    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:44.166853    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:44.169173    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:44.663006    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:44.663018    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:44.663023    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:44.663025    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:44.665460    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:44.666145    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:44.666152    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:44.666158    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:44.666162    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:44.668246    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:45.164214    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:45.164237    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:45.164314    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:45.164325    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:45.167819    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:45.168514    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:45.168521    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:45.168528    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:45.168531    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:45.170434    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:45.170836    3848 pod_ready.go:103] pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace has status "Ready":"False"
	I0815 16:30:45.665030    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:45.665056    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:45.665068    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:45.665073    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:45.668540    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:45.669128    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:45.669139    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:45.669148    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:45.669152    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:45.671055    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:46.163033    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:46.163095    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:46.163108    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:46.163116    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:46.166371    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:46.166786    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:46.166793    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:46.166799    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:46.166803    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:46.168600    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:46.663767    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:46.663791    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:46.663803    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:46.663814    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:46.667030    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:46.667614    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:46.667625    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:46.667633    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:46.667637    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:46.669233    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:47.163455    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:47.163469    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:47.163475    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:47.163480    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:47.167195    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:47.167557    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:47.167565    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:47.167571    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:47.167576    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:47.170814    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:47.171266    3848 pod_ready.go:103] pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace has status "Ready":"False"
	I0815 16:30:47.663794    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:47.663820    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:47.663831    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:47.663839    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:47.667639    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:47.668283    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:47.668291    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:47.668297    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:47.668301    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:47.669950    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:48.164538    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:48.164559    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:48.164581    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:48.164603    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:48.168530    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:48.169233    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:48.169241    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:48.169248    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:48.169251    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:48.171274    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:48.663780    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:48.663804    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:48.663815    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:48.663821    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:48.667278    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:48.667837    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:48.667845    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:48.667851    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:48.667856    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:48.669518    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:49.165064    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:49.165087    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:49.165098    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:49.165104    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:49.168508    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:49.169206    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:49.169217    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:49.169225    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:49.169230    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:49.171198    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:49.171795    3848 pod_ready.go:103] pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace has status "Ready":"False"
	I0815 16:30:49.663424    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:49.663448    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:49.663459    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:49.663467    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:49.667225    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:49.667697    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:49.667705    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:49.667711    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:49.667714    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:49.669376    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:50.164125    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:50.164149    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:50.164161    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:50.164166    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:50.167285    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:50.167810    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:50.167817    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:50.167823    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:50.167827    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:50.171799    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:50.663500    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:50.663525    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:50.663537    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:50.663543    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:50.667177    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:50.667713    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:50.667720    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:50.667726    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:50.667730    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:50.669352    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:51.164194    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:51.164219    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:51.164237    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:51.164244    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:51.167593    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:51.168246    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:51.168257    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:51.168264    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:51.168270    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:51.170524    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:51.664614    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:51.664638    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:51.664657    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:51.664665    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:51.668046    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:51.668566    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:51.668577    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:51.668585    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:51.668607    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:51.671534    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:51.671914    3848 pod_ready.go:103] pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace has status "Ready":"False"
	I0815 16:30:52.164065    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:52.164089    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:52.164101    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:52.164110    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:52.167433    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:52.167935    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:52.167943    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:52.167948    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:52.167952    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:52.169540    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:52.169859    3848 pod_ready.go:93] pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:30:52.169869    3848 pod_ready.go:82] duration metric: took 11.507082407s for pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
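The ~500 ms cadence above is a readiness poll: each tick GETs the pod, checks its Ready condition, then GETs the node it is scheduled on, until the 6m0s budget runs out. The pod_ready.go:103 lines are ticks where the condition was still False; pod_ready.go:93 marks the tick where it flipped to True. A minimal client-go sketch of such a loop follows; waitForPodReady and its 500 ms interval are assumptions for illustration, not minikube's actual pod_ready.go API.

    // Minimal sketch, assuming client-go; waitForPodReady is a hypothetical
    // name, not minikube's actual implementation.
    package readiness

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitForPodReady polls until the pod's Ready condition is True or the
    // 6m0s timeout (as in the log above) expires.
    func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // treat transient API errors as "not ready yet"
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }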
	I0815 16:30:52.169876    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:52.169910    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m03
	I0815 16:30:52.169915    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:52.169920    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:52.169923    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:52.171715    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:52.172141    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:30:52.172148    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:52.172154    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:52.172158    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:52.173532    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:52.173854    3848 pod_ready.go:93] pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:30:52.173863    3848 pod_ready.go:82] duration metric: took 3.981675ms for pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:52.173872    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:52.173900    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:52.173905    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:52.173911    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:52.173915    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:52.175518    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:52.175919    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:52.175926    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:52.175932    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:52.175936    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:52.177444    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:52.675197    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:52.675270    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:52.675284    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:52.675316    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:52.678186    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:52.678703    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:52.678711    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:52.678716    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:52.678719    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:52.680216    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:53.174971    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:53.174985    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:53.174994    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:53.175001    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:53.177452    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:53.177896    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:53.177903    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:53.177909    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:53.177912    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:53.179480    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:53.674788    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:53.674799    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:53.674806    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:53.674809    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:53.676873    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:53.677297    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:53.677305    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:53.677311    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:53.677315    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:53.678908    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:54.175897    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:54.175920    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:54.175937    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:54.175942    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:54.180021    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:30:54.180479    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:54.180486    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:54.180492    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:54.180495    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:54.182351    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:54.182698    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:30:54.674099    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:54.674113    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:54.674122    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:54.674126    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:54.676508    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:54.676959    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:54.676967    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:54.676973    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:54.676977    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:54.678531    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:55.174102    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:55.174117    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:55.174124    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:55.174129    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:55.176616    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:55.176978    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:55.176985    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:55.176991    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:55.176995    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:55.178804    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:55.675041    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:55.675073    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:55.675080    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:55.675083    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:55.677155    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:55.677606    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:55.677614    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:55.677620    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:55.677623    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:55.679257    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:56.174332    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:56.174347    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:56.174355    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:56.174360    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:56.176768    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:56.177182    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:56.177189    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:56.177194    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:56.177199    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:56.178739    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:56.674623    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:56.674644    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:56.674656    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:56.674663    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:56.678017    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:56.678729    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:56.678740    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:56.678748    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:56.678753    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:56.680396    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:56.680664    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:30:57.174239    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:57.174259    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:57.174270    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:57.174276    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:57.176913    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:57.177317    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:57.177325    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:57.177330    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:57.177333    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:57.179089    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:57.674639    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:57.674650    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:57.674656    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:57.674660    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:57.676502    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:57.676984    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:57.676992    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:57.676997    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:57.677001    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:57.678477    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:58.174097    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:58.174117    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:58.174128    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:58.174136    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:58.177182    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:58.177563    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:58.177571    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:58.177575    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:58.177579    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:58.179304    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:58.675031    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:58.675045    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:58.675051    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:58.675055    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:58.680738    3848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 16:30:58.682155    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:58.682163    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:58.682168    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:58.682171    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:58.686617    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:30:58.686985    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:30:59.174980    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:59.175006    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:59.175018    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:59.175023    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:59.178731    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:59.179314    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:59.179322    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:59.179328    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:59.179332    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:59.181206    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:59.674657    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:59.674670    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:59.674676    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:59.674679    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:59.676675    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:59.677055    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:59.677062    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:59.677069    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:59.677074    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:59.679271    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:00.174152    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:00.174175    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:00.174187    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:00.174194    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:00.177768    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:00.178234    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:00.178241    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:00.178247    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:00.178251    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:00.179906    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:00.675229    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:00.675240    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:00.675246    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:00.675250    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:00.677503    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:00.677966    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:00.677974    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:00.677979    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:00.677983    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:00.681462    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:01.174237    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:01.174258    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:01.174271    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:01.174278    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:01.177221    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:01.177958    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:01.177967    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:01.177973    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:01.177987    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:01.179870    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:01.180167    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:01.674059    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:01.674071    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:01.674078    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:01.674082    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:01.678596    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:31:01.679166    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:01.679175    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:01.679183    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:01.679203    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:01.681866    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:02.174721    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:02.174744    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:02.174757    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:02.174765    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:02.177936    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:02.178578    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:02.178585    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:02.178590    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:02.178593    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:02.180199    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:02.674480    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:02.674492    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:02.674498    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:02.674501    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:02.676574    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:02.677121    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:02.677129    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:02.677135    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:02.677138    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:02.678870    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:03.174993    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:03.175017    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:03.175028    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:03.175034    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:03.178103    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:03.178765    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:03.178773    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:03.178780    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:03.178783    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:03.180384    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:03.180717    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:03.675885    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:03.675928    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:03.675935    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:03.675938    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:03.681610    3848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 16:31:03.682165    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:03.682172    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:03.682178    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:03.682187    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:03.685681    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:04.173973    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:04.173985    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:04.173993    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:04.173996    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:04.176170    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:04.176622    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:04.176629    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:04.176635    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:04.176638    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:04.178918    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:04.674029    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:04.674041    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:04.674047    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:04.674051    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:04.676085    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:04.676616    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:04.676624    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:04.676629    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:04.676633    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:04.678653    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:05.174670    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:05.174682    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:05.174692    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:05.174696    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:05.176894    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:05.177444    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:05.177452    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:05.177458    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:05.177462    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:05.179988    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:05.673967    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:05.673984    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:05.673991    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:05.674005    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:05.676133    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:05.676616    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:05.676623    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:05.676629    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:05.676632    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:05.678220    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:05.678588    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:06.174028    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:06.174040    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:06.174046    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:06.174049    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:06.176193    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:06.176556    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:06.176564    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:06.176570    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:06.176574    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:06.178240    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:06.674003    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:06.674018    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:06.674028    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:06.674032    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:06.676638    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:06.677110    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:06.677118    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:06.677124    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:06.677127    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:06.680025    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:07.175462    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:07.175477    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:07.175485    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:07.175489    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:07.178337    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:07.178886    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:07.178895    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:07.178900    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:07.178904    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:07.181117    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:07.674103    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:07.674115    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:07.674121    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:07.674125    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:07.676375    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:07.676766    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:07.676774    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:07.676780    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:07.676783    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:07.678622    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:07.678897    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:08.174128    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:08.174151    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:08.174166    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:08.174203    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:08.177482    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:08.177896    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:08.177904    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:08.177909    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:08.177914    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:08.179348    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:08.674105    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:08.674132    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:08.674180    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:08.674191    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:08.677562    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:08.677981    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:08.677989    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:08.677994    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:08.677997    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:08.679564    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:09.174687    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:09.174712    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:09.174723    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:09.174728    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:09.177711    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:09.178141    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:09.178149    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:09.178155    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:09.178160    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:09.179715    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:09.675793    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:09.675810    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:09.675860    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:09.675867    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:09.681370    3848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 16:31:09.681707    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:09.681714    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:09.681720    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:09.681724    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:09.683407    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:09.683668    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:10.174082    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:10.174096    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:10.174104    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:10.174111    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:10.176432    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:10.176901    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:10.176909    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:10.176916    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:10.176919    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:10.178547    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:10.674143    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:10.674158    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:10.674166    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:10.674171    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:10.676827    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:10.677366    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:10.677374    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:10.677379    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:10.677398    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:10.679369    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:11.174015    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:11.174031    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:11.174039    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:11.174043    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:11.176194    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:11.176646    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:11.176655    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:11.176661    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:11.176664    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:11.178182    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:11.674088    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:11.674100    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:11.674107    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:11.674111    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:11.676722    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:11.677179    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:11.677186    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:11.677192    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:11.677197    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:11.679318    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:12.173967    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:12.173978    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:12.173983    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:12.173986    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:12.176395    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:12.176784    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:12.176792    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:12.176797    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:12.176799    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:12.178613    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:12.178965    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:12.674752    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:12.674764    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:12.674771    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:12.674774    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:12.676796    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:12.677237    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:12.677244    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:12.677249    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:12.677254    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:12.678824    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:13.174235    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:13.174257    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:13.174269    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:13.174275    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:13.177507    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:13.177937    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:13.177945    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:13.177950    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:13.177958    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:13.179998    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:13.674842    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:13.674865    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:13.674920    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:13.674927    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:13.677347    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:13.677743    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:13.677750    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:13.677756    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:13.677760    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:13.679598    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:14.174511    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:14.174531    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:14.174543    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:14.174548    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:14.177242    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:14.177787    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:14.177794    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:14.177799    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:14.177804    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:14.179505    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:14.179846    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:14.674978    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:14.674991    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:14.675000    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:14.675005    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:14.677126    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:14.677577    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:14.677584    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:14.677589    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:14.677592    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:14.679150    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.174111    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:15.174190    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.174206    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.174214    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.178180    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:15.178702    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:15.178709    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.178716    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.178720    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.180563    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.674161    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:15.674175    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.674181    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.674184    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.676320    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:15.676809    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:15.676817    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.676822    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.676826    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.678731    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.679179    3848 pod_ready.go:93] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:15.679188    3848 pod_ready.go:82] duration metric: took 23.505390371s for pod "kube-controller-manager-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.679194    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.679234    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000-m02
	I0815 16:31:15.679239    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.679244    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.679249    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.680973    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.681373    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:15.681379    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.681385    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.681389    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.683105    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.683478    3848 pod_ready.go:93] pod "kube-controller-manager-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:15.683487    3848 pod_ready.go:82] duration metric: took 4.286435ms for pod "kube-controller-manager-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.683493    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.683528    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000-m03
	I0815 16:31:15.683532    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.683538    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.683543    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.685040    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.685461    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:15.685469    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.685474    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.685478    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.687218    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.687628    3848 pod_ready.go:93] pod "kube-controller-manager-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:15.687636    3848 pod_ready.go:82] duration metric: took 4.137303ms for pod "kube-controller-manager-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.687642    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cznkn" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.687674    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cznkn
	I0815 16:31:15.687679    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.687685    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.687690    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.689397    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.689764    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:15.689771    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.689776    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.689787    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.691449    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.691750    3848 pod_ready.go:93] pod "kube-proxy-cznkn" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:15.691759    3848 pod_ready.go:82] duration metric: took 4.111581ms for pod "kube-proxy-cznkn" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.691765    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kxghx" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.691804    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kxghx
	I0815 16:31:15.691809    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.691815    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.691819    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.693452    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.693908    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:15.693915    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.693921    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.693924    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.695674    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.695946    3848 pod_ready.go:93] pod "kube-proxy-kxghx" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:15.695955    3848 pod_ready.go:82] duration metric: took 4.185821ms for pod "kube-proxy-kxghx" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.695961    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qpth7" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.875071    3848 request.go:632] Waited for 179.069493ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qpth7
	I0815 16:31:15.875187    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qpth7
	I0815 16:31:15.875199    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.875210    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.875216    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.877997    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:16.074238    3848 request.go:632] Waited for 195.764515ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m04
	I0815 16:31:16.074336    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m04
	I0815 16:31:16.074348    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:16.074360    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:16.074366    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:16.076828    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:16.077164    3848 pod_ready.go:93] pod "kube-proxy-qpth7" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:16.077173    3848 pod_ready.go:82] duration metric: took 381.20933ms for pod "kube-proxy-qpth7" in "kube-system" namespace to be "Ready" ...
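
The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's local token-bucket rate limiter: bursts of GETs beyond the configured QPS are delayed on the client before they ever reach the apiserver. A minimal Go sketch of where that limit lives; the kubeconfig path is a placeholder, and the QPS/Burst values shown are client-go's defaults:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path; any kubeconfig works for this sketch.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	// client-go defaults: QPS=5, Burst=10. Bursts of requests beyond this
	// are held back locally, which is what the "Waited for ..." lines report.
	config.QPS = 5
	config.Burst = 10
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T\n", cs)
}

Raising QPS and Burst makes the ~200ms waits disappear, at the cost of more load on the apiserver.
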
	I0815 16:31:16.077180    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tf79g" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:16.275150    3848 request.go:632] Waited for 197.922377ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tf79g
	I0815 16:31:16.275315    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tf79g
	I0815 16:31:16.275333    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:16.275348    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:16.275355    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:16.279230    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:16.474637    3848 request.go:632] Waited for 194.734989ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:16.474686    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:16.474694    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:16.474748    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:16.474760    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:16.477402    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:16.477913    3848 pod_ready.go:93] pod "kube-proxy-tf79g" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:16.477922    3848 pod_ready.go:82] duration metric: took 400.738709ms for pod "kube-proxy-tf79g" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:16.477928    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:16.674642    3848 request.go:632] Waited for 196.671207ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000
	I0815 16:31:16.674730    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000
	I0815 16:31:16.674740    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:16.674751    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:16.674791    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:16.677902    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:16.874216    3848 request.go:632] Waited for 195.903155ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:16.874296    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:16.874307    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:16.874318    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:16.874325    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:16.877076    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:16.877354    3848 pod_ready.go:93] pod "kube-scheduler-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:16.877362    3848 pod_ready.go:82] duration metric: took 399.431009ms for pod "kube-scheduler-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:16.877369    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:17.075600    3848 request.go:632] Waited for 198.191772ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m02
	I0815 16:31:17.075685    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m02
	I0815 16:31:17.075692    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:17.075697    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:17.075701    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:17.077601    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:17.275453    3848 request.go:632] Waited for 196.87369ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:17.275508    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:17.275516    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:17.275528    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:17.275536    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:17.278217    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:17.278748    3848 pod_ready.go:93] pod "kube-scheduler-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:17.278761    3848 pod_ready.go:82] duration metric: took 401.387065ms for pod "kube-scheduler-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:17.278778    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:17.474217    3848 request.go:632] Waited for 195.389302ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m03
	I0815 16:31:17.474330    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m03
	I0815 16:31:17.474342    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:17.474353    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:17.474361    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:17.477689    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:17.675623    3848 request.go:632] Waited for 197.469909ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:17.675688    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:17.675697    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:17.675705    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:17.675712    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:17.677994    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:17.678325    3848 pod_ready.go:93] pod "kube-scheduler-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:17.678335    3848 pod_ready.go:82] duration metric: took 399.551961ms for pod "kube-scheduler-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:17.678343    3848 pod_ready.go:39] duration metric: took 37.624501402s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
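
Each pod_ready.go wait above boils down to polling the pod object and inspecting its Ready condition until it flips to True or the 6m0s timeout expires. A hedged client-go sketch of that pattern, not minikube's actual implementation; waitPodReady is a name invented for this example:

package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls the pod every 500ms until its Ready condition is True
// or the timeout expires, mirroring the pod_ready.go waits in the log.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					fmt.Printf("pod %q Ready=%q\n", name, c.Status)
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // Ready condition not posted yet
		})
}

A call like waitPodReady(ctx, cs, "kube-system", "kube-controller-manager-ha-138000", 6*time.Minute) reproduces the 23.5s wait logged above, including the interleaved GETs of the pod and its node.
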
	I0815 16:31:17.678361    3848 api_server.go:52] waiting for apiserver process to appear ...
	I0815 16:31:17.678422    3848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 16:31:17.692897    3848 api_server.go:72] duration metric: took 46.249064527s to wait for apiserver process to appear ...
	I0815 16:31:17.692911    3848 api_server.go:88] waiting for apiserver healthz status ...
	I0815 16:31:17.692928    3848 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0815 16:31:17.695957    3848 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
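
The healthz probe at api_server.go:253 is a plain HTTPS GET whose body is expected to read "ok". A self-contained sketch; skipping TLS verification is a shortcut for the example, where a real client would trust the cluster CA and present client certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Probe the apiserver's /healthz endpoint, as the log does above.
	// InsecureSkipVerify keeps the sketch short; do not do this in production.
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.169.0.5:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}
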
	I0815 16:31:17.695990    3848 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0815 16:31:17.695994    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:17.696000    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:17.696004    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:17.696581    3848 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0815 16:31:17.696664    3848 api_server.go:141] control plane version: v1.31.0
	I0815 16:31:17.696676    3848 api_server.go:131] duration metric: took 3.760735ms to wait for apiserver health ...
	I0815 16:31:17.696684    3848 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 16:31:17.874475    3848 request.go:632] Waited for 177.745811ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:31:17.874542    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:31:17.874551    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:17.874608    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:17.874617    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:17.879453    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:31:17.884757    3848 system_pods.go:59] 26 kube-system pods found
	I0815 16:31:17.884772    3848 system_pods.go:61] "coredns-6f6b679f8f-dmgt5" [47d73953-ec2c-4f17-b2b8-d6a9b5e5a316] Running
	I0815 16:31:17.884778    3848 system_pods.go:61] "coredns-6f6b679f8f-zc8jj" [b4a9df39-b09d-4bc3-97f6-b3176ff8e842] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 16:31:17.884783    3848 system_pods.go:61] "etcd-ha-138000" [c2e7e508-a157-4581-a446-2f48e51c0c16] Running
	I0815 16:31:17.884787    3848 system_pods.go:61] "etcd-ha-138000-m02" [4f371d3f-821b-4eae-ab52-5b75d90acd67] Running
	I0815 16:31:17.884791    3848 system_pods.go:61] "etcd-ha-138000-m03" [ebdd90b3-c3b2-44c2-a411-4f6afa67a455] Running
	I0815 16:31:17.884793    3848 system_pods.go:61] "kindnet-77dc6" [24bfc069-9446-43a7-aa61-308c1a62a20e] Running
	I0815 16:31:17.884796    3848 system_pods.go:61] "kindnet-dsvxt" [7645852b-8535-4cb4-8324-15c9b55176e3] Running
	I0815 16:31:17.884798    3848 system_pods.go:61] "kindnet-m887r" [ba31865b-c712-47a8-9fd8-06420270ac8b] Running
	I0815 16:31:17.884801    3848 system_pods.go:61] "kindnet-z6mnx" [fc6211a0-df99-4350-90ba-a18f74bd1bfc] Running
	I0815 16:31:17.884804    3848 system_pods.go:61] "kube-apiserver-ha-138000" [bbc684f7-d8e2-44f2-987a-df9d05cf54fc] Running
	I0815 16:31:17.884806    3848 system_pods.go:61] "kube-apiserver-ha-138000-m02" [c30e129b-f475-4131-a25e-7eeecb39cbea] Running
	I0815 16:31:17.884809    3848 system_pods.go:61] "kube-apiserver-ha-138000-m03" [90304f02-10f2-446d-b731-674df5602401] Running
	I0815 16:31:17.884811    3848 system_pods.go:61] "kube-controller-manager-ha-138000" [1d4148d1-9798-4662-91de-d9a7dae634e2] Running
	I0815 16:31:17.884814    3848 system_pods.go:61] "kube-controller-manager-ha-138000-m02" [ad693dae-cb0a-46ca-955b-33a9465d911a] Running
	I0815 16:31:17.884816    3848 system_pods.go:61] "kube-controller-manager-ha-138000-m03" [11aa509a-dade-4b66-b157-4d4cccb7f349] Running
	I0815 16:31:17.884819    3848 system_pods.go:61] "kube-proxy-cznkn" [61cd1a9a-80fb-4d0e-a4dd-f5ed2d6ddc0f] Running
	I0815 16:31:17.884821    3848 system_pods.go:61] "kube-proxy-kxghx" [27f61dcc-2812-4038-af2a-aa9f46033869] Running
	I0815 16:31:17.884823    3848 system_pods.go:61] "kube-proxy-qpth7" [a343f80b-0fe9-4c88-9782-5fbf9a6170d1] Running
	I0815 16:31:17.884826    3848 system_pods.go:61] "kube-proxy-tf79g" [ef8a2aeb-55c4-436e-a306-da8b93c9707f] Running
	I0815 16:31:17.884830    3848 system_pods.go:61] "kube-scheduler-ha-138000" [ee1d3eb0-596d-4bf0-bed4-dbd38b286e38] Running
	I0815 16:31:17.884832    3848 system_pods.go:61] "kube-scheduler-ha-138000-m02" [1cc29309-49ad-427e-862a-758cc5711eef] Running
	I0815 16:31:17.884835    3848 system_pods.go:61] "kube-scheduler-ha-138000-m03" [2e3efb3c-6903-4306-adec-d4a47b3a56bd] Running
	I0815 16:31:17.884837    3848 system_pods.go:61] "kube-vip-ha-138000" [1b8c66e5-ffaa-4d4b-a220-8eb664b79d91] Running
	I0815 16:31:17.884839    3848 system_pods.go:61] "kube-vip-ha-138000-m02" [a0fe2ba6-1ace-456f-ab2e-15c43303dcad] Running
	I0815 16:31:17.884841    3848 system_pods.go:61] "kube-vip-ha-138000-m03" [44fe2bd5-bd48-4f86-b38a-7e4b3554174c] Running
	I0815 16:31:17.884844    3848 system_pods.go:61] "storage-provisioner" [35f3a9d7-cb68-4b10-82a2-1bd72d8d1aa6] Running
	I0815 16:31:17.884847    3848 system_pods.go:74] duration metric: took 188.159351ms to wait for pod list to return data ...
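
The "26 kube-system pods found" block is produced by listing the namespace and summarising each pod's phase plus any conditions that are not True, which is why coredns-6f6b679f8f-zc8jj carries the "Ready:ContainersNotReady" suffix. A sketch of that summarisation; listSystemPods is a name made up here:

package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listSystemPods prints one line per kube-system pod: name, UID, phase,
// plus any pod conditions that are not True (cf. the coredns line above).
func listSystemPods(ctx context.Context, cs kubernetes.Interface) error {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		line := fmt.Sprintf("%q [%s] %s", p.Name, p.UID, p.Status.Phase)
		for _, c := range p.Status.Conditions {
			if c.Status != corev1.ConditionTrue {
				line += fmt.Sprintf(" / %s:%s", c.Type, c.Reason)
			}
		}
		fmt.Println(line)
	}
	return nil
}
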
	I0815 16:31:17.884852    3848 default_sa.go:34] waiting for default service account to be created ...
	I0815 16:31:18.074641    3848 request.go:632] Waited for 189.738485ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0815 16:31:18.074728    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0815 16:31:18.074738    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:18.074749    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:18.074756    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:18.078635    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:18.078759    3848 default_sa.go:45] found service account: "default"
	I0815 16:31:18.078768    3848 default_sa.go:55] duration metric: took 193.912663ms for default service account to be created ...
	I0815 16:31:18.078774    3848 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 16:31:18.274230    3848 request.go:632] Waited for 195.413402ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:31:18.274340    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:31:18.274351    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:18.274361    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:18.274369    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:18.279297    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:31:18.284504    3848 system_pods.go:86] 26 kube-system pods found
	I0815 16:31:18.284515    3848 system_pods.go:89] "coredns-6f6b679f8f-dmgt5" [47d73953-ec2c-4f17-b2b8-d6a9b5e5a316] Running
	I0815 16:31:18.284521    3848 system_pods.go:89] "coredns-6f6b679f8f-zc8jj" [b4a9df39-b09d-4bc3-97f6-b3176ff8e842] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 16:31:18.284525    3848 system_pods.go:89] "etcd-ha-138000" [c2e7e508-a157-4581-a446-2f48e51c0c16] Running
	I0815 16:31:18.284530    3848 system_pods.go:89] "etcd-ha-138000-m02" [4f371d3f-821b-4eae-ab52-5b75d90acd67] Running
	I0815 16:31:18.284534    3848 system_pods.go:89] "etcd-ha-138000-m03" [ebdd90b3-c3b2-44c2-a411-4f6afa67a455] Running
	I0815 16:31:18.284537    3848 system_pods.go:89] "kindnet-77dc6" [24bfc069-9446-43a7-aa61-308c1a62a20e] Running
	I0815 16:31:18.284540    3848 system_pods.go:89] "kindnet-dsvxt" [7645852b-8535-4cb4-8324-15c9b55176e3] Running
	I0815 16:31:18.284543    3848 system_pods.go:89] "kindnet-m887r" [ba31865b-c712-47a8-9fd8-06420270ac8b] Running
	I0815 16:31:18.284545    3848 system_pods.go:89] "kindnet-z6mnx" [fc6211a0-df99-4350-90ba-a18f74bd1bfc] Running
	I0815 16:31:18.284550    3848 system_pods.go:89] "kube-apiserver-ha-138000" [bbc684f7-d8e2-44f2-987a-df9d05cf54fc] Running
	I0815 16:31:18.284554    3848 system_pods.go:89] "kube-apiserver-ha-138000-m02" [c30e129b-f475-4131-a25e-7eeecb39cbea] Running
	I0815 16:31:18.284557    3848 system_pods.go:89] "kube-apiserver-ha-138000-m03" [90304f02-10f2-446d-b731-674df5602401] Running
	I0815 16:31:18.284561    3848 system_pods.go:89] "kube-controller-manager-ha-138000" [1d4148d1-9798-4662-91de-d9a7dae634e2] Running
	I0815 16:31:18.284564    3848 system_pods.go:89] "kube-controller-manager-ha-138000-m02" [ad693dae-cb0a-46ca-955b-33a9465d911a] Running
	I0815 16:31:18.284567    3848 system_pods.go:89] "kube-controller-manager-ha-138000-m03" [11aa509a-dade-4b66-b157-4d4cccb7f349] Running
	I0815 16:31:18.284570    3848 system_pods.go:89] "kube-proxy-cznkn" [61cd1a9a-80fb-4d0e-a4dd-f5ed2d6ddc0f] Running
	I0815 16:31:18.284572    3848 system_pods.go:89] "kube-proxy-kxghx" [27f61dcc-2812-4038-af2a-aa9f46033869] Running
	I0815 16:31:18.284575    3848 system_pods.go:89] "kube-proxy-qpth7" [a343f80b-0fe9-4c88-9782-5fbf9a6170d1] Running
	I0815 16:31:18.284579    3848 system_pods.go:89] "kube-proxy-tf79g" [ef8a2aeb-55c4-436e-a306-da8b93c9707f] Running
	I0815 16:31:18.284582    3848 system_pods.go:89] "kube-scheduler-ha-138000" [ee1d3eb0-596d-4bf0-bed4-dbd38b286e38] Running
	I0815 16:31:18.284586    3848 system_pods.go:89] "kube-scheduler-ha-138000-m02" [1cc29309-49ad-427e-862a-758cc5711eef] Running
	I0815 16:31:18.284588    3848 system_pods.go:89] "kube-scheduler-ha-138000-m03" [2e3efb3c-6903-4306-adec-d4a47b3a56bd] Running
	I0815 16:31:18.284591    3848 system_pods.go:89] "kube-vip-ha-138000" [1b8c66e5-ffaa-4d4b-a220-8eb664b79d91] Running
	I0815 16:31:18.284594    3848 system_pods.go:89] "kube-vip-ha-138000-m02" [a0fe2ba6-1ace-456f-ab2e-15c43303dcad] Running
	I0815 16:31:18.284596    3848 system_pods.go:89] "kube-vip-ha-138000-m03" [44fe2bd5-bd48-4f86-b38a-7e4b3554174c] Running
	I0815 16:31:18.284599    3848 system_pods.go:89] "storage-provisioner" [35f3a9d7-cb68-4b10-82a2-1bd72d8d1aa6] Running
	I0815 16:31:18.284603    3848 system_pods.go:126] duration metric: took 205.826361ms to wait for k8s-apps to be running ...
	I0815 16:31:18.284609    3848 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 16:31:18.284679    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 16:31:18.296708    3848 system_svc.go:56] duration metric: took 12.095446ms WaitForService to wait for kubelet
	I0815 16:31:18.296724    3848 kubeadm.go:582] duration metric: took 46.852894704s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 16:31:18.296736    3848 node_conditions.go:102] verifying NodePressure condition ...
	I0815 16:31:18.474267    3848 request.go:632] Waited for 177.483283ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0815 16:31:18.474322    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0815 16:31:18.474330    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:18.474371    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:18.474392    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:18.477388    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:18.478383    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:31:18.478396    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:31:18.478405    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:31:18.478408    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:31:18.478412    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:31:18.478415    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:31:18.478418    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:31:18.478423    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:31:18.478427    3848 node_conditions.go:105] duration metric: took 181.688465ms to run NodePressure ...
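
The NodePressure verification reads capacity straight off the Node objects; the four ephemeral-storage/cpu pairs above are one per node in the list. A sketch of that read-out (printNodeCapacity is invented for the example):

package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity lists all nodes and reports the same two capacity
// figures the NodePressure check logs above.
func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}
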
	I0815 16:31:18.478434    3848 start.go:241] waiting for startup goroutines ...
	I0815 16:31:18.478453    3848 start.go:255] writing updated cluster config ...
	I0815 16:31:18.501967    3848 out.go:201] 
	I0815 16:31:18.522062    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:31:18.522177    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:31:18.560022    3848 out.go:177] * Starting "ha-138000-m03" control-plane node in "ha-138000" cluster
	I0815 16:31:18.618077    3848 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:31:18.618104    3848 cache.go:56] Caching tarball of preloaded images
	I0815 16:31:18.618293    3848 preload.go:172] Found /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0815 16:31:18.618310    3848 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:31:18.618409    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:31:18.619051    3848 start.go:360] acquireMachinesLock for ha-138000-m03: {Name:mkac8034c8a60617271f2754a03dcaf74fc4fef7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:31:18.619147    3848 start.go:364] duration metric: took 77.203µs to acquireMachinesLock for "ha-138000-m03"
	I0815 16:31:18.619166    3848 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:31:18.619174    3848 fix.go:54] fixHost starting: m03
	I0815 16:31:18.619485    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:31:18.619510    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:31:18.628416    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52280
	I0815 16:31:18.628739    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:31:18.629076    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:31:18.629087    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:31:18.629285    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:31:18.629412    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:31:18.629506    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetState
	I0815 16:31:18.629587    3848 main.go:141] libmachine: (ha-138000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:31:18.629688    3848 main.go:141] libmachine: (ha-138000-m03) DBG | hyperkit pid from json: 3119
	I0815 16:31:18.630594    3848 main.go:141] libmachine: (ha-138000-m03) DBG | hyperkit pid 3119 missing from process table
	I0815 16:31:18.630635    3848 fix.go:112] recreateIfNeeded on ha-138000-m03: state=Stopped err=<nil>
	I0815 16:31:18.630646    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	W0815 16:31:18.630738    3848 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:31:18.653953    3848 out.go:177] * Restarting existing hyperkit VM for "ha-138000-m03" ...
	I0815 16:31:18.711722    3848 main.go:141] libmachine: (ha-138000-m03) Calling .Start
	I0815 16:31:18.712041    3848 main.go:141] libmachine: (ha-138000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:31:18.712160    3848 main.go:141] libmachine: (ha-138000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/hyperkit.pid
	I0815 16:31:18.713734    3848 main.go:141] libmachine: (ha-138000-m03) DBG | hyperkit pid 3119 missing from process table
	I0815 16:31:18.713751    3848 main.go:141] libmachine: (ha-138000-m03) DBG | pid 3119 is in state "Stopped"
	I0815 16:31:18.713774    3848 main.go:141] libmachine: (ha-138000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/hyperkit.pid...
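
"hyperkit pid 3119 missing from process table" is the classic stale-pid-file situation: the pid file survived an unclean shutdown but the process is gone, so the driver deletes the file before launching a new VM. A sketch of that liveness probe using the signal-0 trick; the /tmp path stands in for the hyperkit.pid file named in the log:

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"syscall"
)

// pidAlive reports whether a process with the given pid exists. On Unix,
// os.FindProcess always succeeds; Signal(0) performs the real check.
func pidAlive(pid int) bool {
	proc, err := os.FindProcess(pid)
	if err != nil {
		return false
	}
	return proc.Signal(syscall.Signal(0)) == nil
}

func main() {
	data, err := os.ReadFile("/tmp/hyperkit.pid") // placeholder path
	if err != nil {
		fmt.Println("no pid file:", err)
		return
	}
	pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
	if err == nil && !pidAlive(pid) {
		fmt.Printf("pid %d missing from process table, removing stale pid file\n", pid)
		os.Remove("/tmp/hyperkit.pid")
	}
}
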
	I0815 16:31:18.713958    3848 main.go:141] libmachine: (ha-138000-m03) DBG | Using UUID 4228381e-4618-4b8b-ac7c-129bf380703a
	I0815 16:31:18.742338    3848 main.go:141] libmachine: (ha-138000-m03) DBG | Generated MAC 9e:18:89:2a:2d:99
	I0815 16:31:18.742370    3848 main.go:141] libmachine: (ha-138000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000
	I0815 16:31:18.742565    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"4228381e-4618-4b8b-ac7c-129bf380703a", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037f470)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:31:18.742609    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"4228381e-4618-4b8b-ac7c-129bf380703a", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037f470)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:31:18.742699    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "4228381e-4618-4b8b-ac7c-129bf380703a", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/ha-138000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"}
	I0815 16:31:18.742751    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 4228381e-4618-4b8b-ac7c-129bf380703a -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/ha-138000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"
	I0815 16:31:18.742790    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0815 16:31:18.744551    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 DEBUG: hyperkit: Pid is 4186
	I0815 16:31:18.745071    3848 main.go:141] libmachine: (ha-138000-m03) DBG | Attempt 0
	I0815 16:31:18.745087    3848 main.go:141] libmachine: (ha-138000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:31:18.745163    3848 main.go:141] libmachine: (ha-138000-m03) DBG | hyperkit pid from json: 4186
	I0815 16:31:18.746856    3848 main.go:141] libmachine: (ha-138000-m03) DBG | Searching for 9e:18:89:2a:2d:99 in /var/db/dhcpd_leases ...
	I0815 16:31:18.746937    3848 main.go:141] libmachine: (ha-138000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0815 16:31:18.746955    3848 main.go:141] libmachine: (ha-138000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:31:18.746980    3848 main.go:141] libmachine: (ha-138000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:31:18.746991    3848 main.go:141] libmachine: (ha-138000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be8e15}
	I0815 16:31:18.747032    3848 main.go:141] libmachine: (ha-138000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfdedc}
	I0815 16:31:18.747039    3848 main.go:141] libmachine: (ha-138000-m03) DBG | Found match: 9e:18:89:2a:2d:99
	I0815 16:31:18.747040    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetConfigRaw
	I0815 16:31:18.747045    3848 main.go:141] libmachine: (ha-138000-m03) DBG | IP: 192.169.0.7
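
To learn the VM's IP, the driver scans macOS's /var/db/dhcpd_leases for the MAC it generated, as the "Searching for 9e:18:89:2a:2d:99" lines show. A sketch of that lookup; the field=value block layout is an assumption based on how bootpd writes the file (ip_address preceding hw_address within each block), and findIPByMAC is a name invented here:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIPByMAC scans the bootpd lease file for a block whose hw_address ends
// with the given MAC and returns the ip_address seen in the same block.
func findIPByMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// e.g. hw_address=1,9e:18:89:2a:2d:99 (the leading "1," is a type byte)
			if strings.HasSuffix(line, ","+mac) {
				return ip, nil
			}
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("MAC %s not found in %s", mac, path)
}

func main() {
	ip, err := findIPByMAC("/var/db/dhcpd_leases", "9e:18:89:2a:2d:99")
	fmt.Println(ip, err) // expect 192.169.0.7 per the "Found match" line above
}
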
	I0815 16:31:18.747774    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetIP
	I0815 16:31:18.747963    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:31:18.748524    3848 machine.go:93] provisionDockerMachine start ...
	I0815 16:31:18.748538    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:31:18.748670    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:18.748765    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:18.748845    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:18.748950    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:18.749050    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:18.749179    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:31:18.749325    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0815 16:31:18.749333    3848 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 16:31:18.752657    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0815 16:31:18.760833    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0815 16:31:18.761721    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:31:18.761738    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:31:18.761746    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:31:18.761755    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:31:19.145894    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0815 16:31:19.145910    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0815 16:31:19.260828    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:31:19.260843    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:31:19.260851    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:31:19.260862    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:31:19.261711    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0815 16:31:19.261721    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0815 16:31:24.888063    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:24 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0815 16:31:24.888137    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:24 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0815 16:31:24.888149    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:24 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0815 16:31:24.911372    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:24 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0815 16:31:29.819902    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 16:31:29.819917    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetMachineName
	I0815 16:31:29.820052    3848 buildroot.go:166] provisioning hostname "ha-138000-m03"
	I0815 16:31:29.820067    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetMachineName
	I0815 16:31:29.820174    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:29.820268    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:29.820353    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:29.820429    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:29.820504    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:29.820626    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:31:29.820777    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0815 16:31:29.820785    3848 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-138000-m03 && echo "ha-138000-m03" | sudo tee /etc/hostname
	I0815 16:31:29.898224    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-138000-m03
	
	I0815 16:31:29.898247    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:29.898395    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:29.898481    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:29.898567    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:29.898654    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:29.898789    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:31:29.898974    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0815 16:31:29.898986    3848 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-138000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-138000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-138000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 16:31:29.968919    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
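
Every provisioning step above (the hostname probe, "sudo hostname ... | sudo tee /etc/hostname", and the /etc/hosts rewrite) runs as one SSH command against 192.169.0.7:22 using the machine's id_rsa key, per the sshutil.go:53 line earlier. A minimal golang.org/x/crypto/ssh sketch of such a probe; the key path is a placeholder and host-key verification is skipped to keep the example short:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/machines/ha-138000-m03/id_rsa") // placeholder
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // shortcut for the sketch
	}
	client, err := ssh.Dial("tcp", "192.169.0.7:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	out, err := sess.Output("hostname") // the provisioner's first probe above
	if err != nil {
		panic(err)
	}
	fmt.Printf("SSH cmd output: %s", out)
}
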
	I0815 16:31:29.968938    3848 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19452-977/.minikube CaCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19452-977/.minikube}
	I0815 16:31:29.968947    3848 buildroot.go:174] setting up certificates
	I0815 16:31:29.968952    3848 provision.go:84] configureAuth start
	I0815 16:31:29.968959    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetMachineName
	I0815 16:31:29.969088    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetIP
	I0815 16:31:29.969172    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:29.969251    3848 provision.go:143] copyHostCerts
	I0815 16:31:29.969278    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:31:29.969343    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem, removing ...
	I0815 16:31:29.969348    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:31:29.969482    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem (1082 bytes)
	I0815 16:31:29.969678    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:31:29.969716    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem, removing ...
	I0815 16:31:29.969721    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:31:29.969830    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem (1123 bytes)
	I0815 16:31:29.969984    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:31:29.970023    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem, removing ...
	I0815 16:31:29.970028    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:31:29.970129    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem (1679 bytes)
	I0815 16:31:29.970281    3848 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem org=jenkins.ha-138000-m03 san=[127.0.0.1 192.169.0.7 ha-138000-m03 localhost minikube]
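
provision.go:117 issues a server certificate signed by the minikube CA with SANs [127.0.0.1 192.169.0.7 ha-138000-m03 localhost minikube]. A self-contained crypto/x509 sketch of that issuance; it generates a throwaway CA instead of loading ca.pem/ca-key.pem, and error handling is elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for ca.pem/ca-key.pem from the log.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SAN list from the provision.go:117 line.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-138000-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-138000-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.7")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	fmt.Println("issued server.pem equivalent for ha-138000-m03")
}
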
	I0815 16:31:30.063220    3848 provision.go:177] copyRemoteCerts
	I0815 16:31:30.063270    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 16:31:30.063286    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:30.063426    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:30.063510    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:30.063603    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:30.063685    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/id_rsa Username:docker}
	I0815 16:31:30.101783    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 16:31:30.101861    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 16:31:30.121792    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 16:31:30.121868    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 16:31:30.141970    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 16:31:30.142077    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 16:31:30.161960    3848 provision.go:87] duration metric: took 192.993235ms to configureAuth
	I0815 16:31:30.161983    3848 buildroot.go:189] setting minikube options for container-runtime
	I0815 16:31:30.162167    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:31:30.162199    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:31:30.162337    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:30.162430    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:30.162521    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:30.162598    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:30.162675    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:30.162784    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:31:30.162913    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0815 16:31:30.162921    3848 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0815 16:31:30.228685    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0815 16:31:30.228697    3848 buildroot.go:70] root file system type: tmpfs
	I0815 16:31:30.228781    3848 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0815 16:31:30.228793    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:30.228929    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:30.229020    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:30.229108    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:30.229195    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:30.229313    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:31:30.229444    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0815 16:31:30.229494    3848 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0815 16:31:30.305200    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0815 16:31:30.305217    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:30.305352    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:30.305448    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:30.305543    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:30.305648    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:30.305802    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:31:30.305948    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0815 16:31:30.305961    3848 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0815 16:31:31.969522    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0815 16:31:31.969536    3848 machine.go:96] duration metric: took 13.221047415s to provisionDockerMachine
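
Note: the install idiom above is worth spelling out. The unit is written to docker.service.new, and `diff -u old new || { mv ...; systemctl ...; }` relies on diff exiting non-zero when the files differ or the old unit is missing (as it was here), so the move/daemon-reload/enable/restart sequence runs only when something actually changed. A local sketch of the same write-compare-swap pattern (paths are illustrative):

```go
package main

import (
	"bytes"
	"fmt"
	"os"
)

// installIfChanged mirrors the idiom above: write the candidate unit
// next to the installed one and swap it in only when the two differ
// (or nothing is installed yet). The caller restarts the service
// only when this returns true.
func installIfChanged(path string, unit []byte) (bool, error) {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, unit) {
		return false, nil // identical: skip daemon-reload/restart
	}
	tmp := path + ".new"
	if err := os.WriteFile(tmp, unit, 0o644); err != nil {
		return false, err
	}
	return true, os.Rename(tmp, path)
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
	changed, err := installIfChanged("/tmp/docker.service", unit) // illustrative path
	if err != nil {
		panic(err)
	}
	fmt.Println("unit changed:", changed)
}
```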
	I0815 16:31:31.969548    3848 start.go:293] postStartSetup for "ha-138000-m03" (driver="hyperkit")
	I0815 16:31:31.969555    3848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 16:31:31.969566    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:31:31.969757    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 16:31:31.969772    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:31.969871    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:31.969976    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:31.970054    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:31.970139    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/id_rsa Username:docker}
	I0815 16:31:32.013928    3848 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 16:31:32.017159    3848 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 16:31:32.017170    3848 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/addons for local assets ...
	I0815 16:31:32.017274    3848 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/files for local assets ...
	I0815 16:31:32.017462    3848 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> 14982.pem in /etc/ssl/certs
	I0815 16:31:32.017468    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /etc/ssl/certs/14982.pem
	I0815 16:31:32.017677    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 16:31:32.029028    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:31:32.059130    3848 start.go:296] duration metric: took 89.573356ms for postStartSetup
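
Note: postStartSetup mirrors everything under .minikube/files onto the guest at the same relative path, which is how files/etc/ssl/certs/14982.pem above becomes /etc/ssl/certs/14982.pem. A sketch of that scan under the same convention (not minikube's actual filesync code):

```go
package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"strings"
)

// listFileAssets walks a local "files" tree and maps every file to the
// same relative path on the guest, e.g.
//   .minikube/files/etc/ssl/certs/14982.pem -> /etc/ssl/certs/14982.pem
func listFileAssets(root string) (map[string]string, error) {
	assets := map[string]string{}
	err := filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, rerr := filepath.Rel(root, p)
		if rerr != nil {
			return rerr
		}
		assets[p] = "/" + strings.ReplaceAll(rel, string(filepath.Separator), "/")
		return nil
	})
	return assets, err
}

func main() {
	m, err := listFileAssets(".minikube/files") // illustrative root
	if err != nil {
		fmt.Println("scan failed:", err)
		return
	}
	for src, dst := range m {
		fmt.Println(src, "->", dst)
	}
}
```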
	I0815 16:31:32.059162    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:31:32.059341    3848 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0815 16:31:32.059355    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:32.059449    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:32.059534    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:32.059624    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:32.059708    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/id_rsa Username:docker}
	I0815 16:31:32.098694    3848 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0815 16:31:32.098758    3848 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0815 16:31:32.152993    3848 fix.go:56] duration metric: took 13.533862474s for fixHost
	I0815 16:31:32.153017    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:32.153168    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:32.153266    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:32.153360    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:32.153453    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:32.153579    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:31:32.153719    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0815 16:31:32.153727    3848 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 16:31:32.220010    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723764692.474550074
	
	I0815 16:31:32.220026    3848 fix.go:216] guest clock: 1723764692.474550074
	I0815 16:31:32.220031    3848 fix.go:229] Guest: 2024-08-15 16:31:32.474550074 -0700 PDT Remote: 2024-08-15 16:31:32.153007 -0700 PDT m=+98.155027601 (delta=321.543074ms)
	I0815 16:31:32.220043    3848 fix.go:200] guest clock delta is within tolerance: 321.543074ms
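
Note: the guest clock check parses the `date +%s.%N` output and compares it with the host's wall clock; the 321.543074ms delta above is simply guest minus host. The same arithmetic, using the exact values from the log (the 2s tolerance is an assumption for this sketch):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses `date +%s.%N` output from the guest, e.g.
// "1723764692.474550074", and returns its offset from hostNow.
func guestClockDelta(guest string, hostNow time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guest), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := (parts[1] + "000000000")[:9] // pad/truncate to nanoseconds
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(hostNow), nil
}

func main() {
	// Values lifted from the log: guest 1723764692.474550074 vs the
	// host's ...32.153007 sample; the delta works out to 321.543074ms.
	host := time.Unix(1723764692, 153007000)
	delta, err := guestClockDelta("1723764692.474550074", host)
	if err != nil {
		panic(err)
	}
	fmt.Println("delta:", delta, "within tolerance:", delta.Abs() < 2*time.Second)
}
```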
	I0815 16:31:32.220047    3848 start.go:83] releasing machines lock for "ha-138000-m03", held for 13.600937599s
	I0815 16:31:32.220063    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:31:32.220193    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetIP
	I0815 16:31:32.242484    3848 out.go:177] * Found network options:
	I0815 16:31:32.262540    3848 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0815 16:31:32.284750    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 16:31:32.284780    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 16:31:32.284808    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:31:32.285357    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:31:32.285486    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:31:32.285580    3848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 16:31:32.285610    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	W0815 16:31:32.285635    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 16:31:32.285649    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 16:31:32.285725    3848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 16:31:32.285743    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:32.285746    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:32.285912    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:32.285930    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:32.286051    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:32.286078    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:32.286176    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:32.286220    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/id_rsa Username:docker}
	I0815 16:31:32.286297    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/id_rsa Username:docker}
	W0815 16:31:32.322271    3848 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 16:31:32.322331    3848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 16:31:32.369504    3848 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
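
Note: any bridge or podman CNI config left in /etc/cni/net.d would conflict with the CNI minikube manages, so the find/mv above renames them to *.mk_disabled rather than deleting them. A local equivalent of that non-destructive disable (against a real /etc/cni/net.d this needs root):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI renames bridge/podman configs in dir to
// <name>.mk_disabled, the same non-destructive disable as the
// find/mv command above.
func disableConflictingCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	got, err := disableConflictingCNI("/etc/cni/net.d") // needs root
	fmt.Println("disabled:", got, "err:", err)
}
```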
	I0815 16:31:32.369521    3848 start.go:495] detecting cgroup driver to use...
	I0815 16:31:32.369607    3848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:31:32.385397    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0815 16:31:32.393793    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0815 16:31:32.401893    3848 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0815 16:31:32.401954    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0815 16:31:32.410021    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:31:32.418144    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0815 16:31:32.426371    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:31:32.434583    3848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 16:31:32.442902    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0815 16:31:32.451254    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0815 16:31:32.459565    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0815 16:31:32.467863    3848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 16:31:32.475226    3848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 16:31:32.482724    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:31:32.583602    3848 ssh_runner.go:195] Run: sudo systemctl restart containerd
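
Note: the run of sed commands above rewrites /etc/containerd/config.toml in place; the key edit for the "cgroupfs" driver is forcing `SystemdCgroup = false`. A sketch of that single rewrite using the same regex, applied to a scratch copy of the file:

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setSystemdCgroup mimics the sed rewrite from the log:
//   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
func setSystemdCgroup(path string, enabled bool) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data,
		[]byte(fmt.Sprintf("${1}SystemdCgroup = %t", enabled)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	// Scratch stand-in for /etc/containerd/config.toml.
	_ = os.WriteFile("config.toml", []byte("    SystemdCgroup = true\n"), 0o644)
	if err := setSystemdCgroup("config.toml", false); err != nil {
		panic(err)
	}
	b, _ := os.ReadFile("config.toml")
	fmt.Print(string(b)) // "    SystemdCgroup = false"
}
```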
	I0815 16:31:32.603710    3848 start.go:495] detecting cgroup driver to use...
	I0815 16:31:32.603796    3848 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0815 16:31:32.620091    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:31:32.633248    3848 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 16:31:32.652532    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:31:32.666138    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:31:32.676424    3848 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0815 16:31:32.697061    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:31:32.707503    3848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:31:32.722896    3848 ssh_runner.go:195] Run: which cri-dockerd
	I0815 16:31:32.725902    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0815 16:31:32.733526    3848 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0815 16:31:32.747908    3848 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0815 16:31:32.853084    3848 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0815 16:31:32.953384    3848 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0815 16:31:32.953408    3848 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0815 16:31:32.968013    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:31:33.073760    3848 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0815 16:31:35.380632    3848 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.306859581s)
	I0815 16:31:35.380695    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0815 16:31:35.391776    3848 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0815 16:31:35.404750    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:31:35.414823    3848 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0815 16:31:35.508250    3848 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0815 16:31:35.605930    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:31:35.720643    3848 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0815 16:31:35.734388    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:31:35.745523    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:31:35.849768    3848 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0815 16:31:35.916223    3848 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0815 16:31:35.916311    3848 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0815 16:31:35.920652    3848 start.go:563] Will wait 60s for crictl version
	I0815 16:31:35.920712    3848 ssh_runner.go:195] Run: which crictl
	I0815 16:31:35.923687    3848 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 16:31:35.951143    3848 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0815 16:31:35.951216    3848 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:31:35.970702    3848 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:31:36.011114    3848 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0815 16:31:36.053083    3848 out.go:177]   - env NO_PROXY=192.169.0.5
	I0815 16:31:36.074064    3848 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0815 16:31:36.094992    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetIP
	I0815 16:31:36.095254    3848 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0815 16:31:36.098563    3848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
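
Note: the `{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo "..."; } > /tmp/h.$$; sudo cp` idiom above drops any stale entry for the name and appends the fresh mapping in one pass, avoiding an in-place sed on a file that may be bind-mounted. The same logic in Go, demoed against a scratch file:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "\t<name>" and
// appends the fresh "<ip>\t<name>" mapping, like the shell idiom above.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Scratch file instead of the real /etc/hosts.
	tmp := "hosts.demo"
	_ = os.WriteFile(tmp, []byte("127.0.0.1\tlocalhost\n192.169.0.9\thost.minikube.internal\n"), 0o644)
	if err := ensureHostsEntry(tmp, "192.169.0.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
	b, _ := os.ReadFile(tmp)
	fmt.Print(string(b))
}
```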
	I0815 16:31:36.107924    3848 mustload.go:65] Loading cluster: ha-138000
	I0815 16:31:36.108121    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:31:36.108349    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:31:36.108371    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:31:36.117631    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52302
	I0815 16:31:36.118004    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:31:36.118362    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:31:36.118373    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:31:36.118572    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:31:36.118683    3848 main.go:141] libmachine: (ha-138000) Calling .GetState
	I0815 16:31:36.118769    3848 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:31:36.118858    3848 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid from json: 3862
	I0815 16:31:36.119807    3848 host.go:66] Checking if "ha-138000" exists ...
	I0815 16:31:36.120056    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:31:36.120079    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:31:36.128888    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52304
	I0815 16:31:36.129245    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:31:36.129613    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:31:36.129628    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:31:36.129838    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:31:36.129960    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:31:36.130061    3848 certs.go:68] Setting up /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000 for IP: 192.169.0.7
	I0815 16:31:36.130067    3848 certs.go:194] generating shared ca certs ...
	I0815 16:31:36.130076    3848 certs.go:226] acquiring lock for ca certs: {Name:mka1c88019f6064d2983ac988db71a67aeb65696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:31:36.130237    3848 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key
	I0815 16:31:36.130321    3848 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key
	I0815 16:31:36.130330    3848 certs.go:256] generating profile certs ...
	I0815 16:31:36.130443    3848 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.key
	I0815 16:31:36.130530    3848 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key.c7e1c29f
	I0815 16:31:36.130604    3848 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key
	I0815 16:31:36.130617    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 16:31:36.130638    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 16:31:36.130658    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 16:31:36.130676    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 16:31:36.130694    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 16:31:36.130735    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 16:31:36.130766    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 16:31:36.130785    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 16:31:36.130871    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem (1338 bytes)
	W0815 16:31:36.130920    3848 certs.go:480] ignoring /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498_empty.pem, impossibly tiny 0 bytes
	I0815 16:31:36.130928    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 16:31:36.130977    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem (1082 bytes)
	I0815 16:31:36.131019    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem (1123 bytes)
	I0815 16:31:36.131050    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem (1679 bytes)
	I0815 16:31:36.131116    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:31:36.131153    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem -> /usr/share/ca-certificates/1498.pem
	I0815 16:31:36.131174    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /usr/share/ca-certificates/14982.pem
	I0815 16:31:36.131191    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:31:36.131214    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:31:36.131305    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:31:36.131384    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:31:36.131503    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:31:36.131582    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:31:36.163135    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0815 16:31:36.167195    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0815 16:31:36.177598    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0815 16:31:36.181380    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0815 16:31:36.190596    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0815 16:31:36.194001    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0815 16:31:36.202689    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0815 16:31:36.205906    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0815 16:31:36.214386    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0815 16:31:36.217472    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0815 16:31:36.226235    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0815 16:31:36.229561    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0815 16:31:36.238534    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 16:31:36.259009    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 16:31:36.279081    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 16:31:36.299147    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 16:31:36.319142    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 16:31:36.339480    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 16:31:36.359157    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 16:31:36.379445    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 16:31:36.399731    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem --> /usr/share/ca-certificates/1498.pem (1338 bytes)
	I0815 16:31:36.419506    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /usr/share/ca-certificates/14982.pem (1708 bytes)
	I0815 16:31:36.439172    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 16:31:36.458742    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0815 16:31:36.472323    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0815 16:31:36.486349    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0815 16:31:36.500064    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0815 16:31:36.513680    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0815 16:31:36.527778    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0815 16:31:36.541967    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0815 16:31:36.555903    3848 ssh_runner.go:195] Run: openssl version
	I0815 16:31:36.560554    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14982.pem && ln -fs /usr/share/ca-certificates/14982.pem /etc/ssl/certs/14982.pem"
	I0815 16:31:36.569772    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14982.pem
	I0815 16:31:36.573086    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:14 /usr/share/ca-certificates/14982.pem
	I0815 16:31:36.573133    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14982.pem
	I0815 16:31:36.577434    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14982.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 16:31:36.585945    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 16:31:36.594481    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:31:36.598014    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:05 /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:31:36.598056    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:31:36.602322    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 16:31:36.611545    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1498.pem && ln -fs /usr/share/ca-certificates/1498.pem /etc/ssl/certs/1498.pem"
	I0815 16:31:36.620267    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1498.pem
	I0815 16:31:36.623763    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:14 /usr/share/ca-certificates/1498.pem
	I0815 16:31:36.623818    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1498.pem
	I0815 16:31:36.628404    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1498.pem /etc/ssl/certs/51391683.0"
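
Note: `openssl x509 -hash -noout` prints the certificate's subject-name hash, and OpenSSL's directory lookup expects a <hash>.0 symlink under /etc/ssl/certs; that is why 14982.pem gets linked as 3ec20f2e.0 and the minikube CA as b5213941.0 above. A sketch of that step (needs openssl on PATH and write access to the certs dir):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash asks openssl for the cert's subject-name hash and
// creates the <hash>.0 symlink OpenSSL's directory lookup expects.
func linkCertByHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", certPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // replace a stale link, like ln -fs
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkCertByHash("/usr/share/ca-certificates/14982.pem",
		"/etc/ssl/certs") // paths from the log; needs root on the guest
	if err != nil {
		fmt.Println("link failed:", err)
		return
	}
	fmt.Println("created", link) // e.g. /etc/ssl/certs/3ec20f2e.0
}
```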
	I0815 16:31:36.637260    3848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 16:31:36.640760    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 16:31:36.645076    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 16:31:36.649285    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 16:31:36.653546    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 16:31:36.657801    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 16:31:36.662041    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
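
Note: each `-checkend 86400` above asks whether the certificate expires within the next 24 hours (openssl exits non-zero if so), gating cert regeneration. The same check done natively with crypto/x509:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within d, the question `openssl x509 -checkend 86400` answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt",
		24*time.Hour) // path from the log; adjust locally
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("needs regeneration:", soon)
}
```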
	I0815 16:31:36.666218    3848 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.31.0 docker true true} ...
	I0815 16:31:36.666285    3848 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-138000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
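
Note: the generated kubelet unit reuses the drop-in trick documented in the docker unit above: an empty `ExecStart=` clears any inherited command before the node-specific command line is set. A cut-down render of such a unit with text/template (the field names and trimmed flag set are assumptions, not minikube's full template):

```go
package main

import (
	"os"
	"text/template"
)

// Cut-down kubelet unit; the empty ExecStart= clears any inherited
// command before the real one is set.
const kubeletUnit = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	_ = t.Execute(os.Stdout, map[string]string{
		"BinDir": "/var/lib/minikube/binaries/v1.31.0",
		"Node":   "ha-138000-m03",
		"IP":     "192.169.0.7",
	})
}
```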
	I0815 16:31:36.666303    3848 kube-vip.go:115] generating kube-vip config ...
	I0815 16:31:36.666340    3848 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 16:31:36.678617    3848 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 16:31:36.678664    3848 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
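
Note: kube-vip runs as a static pod (the manifest is scp'd to /etc/kubernetes/manifests/kube-vip.yaml a few lines below) and is configured entirely through env vars; lb_enable/lb_port is the pair that the "auto-enabling control-plane load-balancing" step toggled. The lease timings should satisfy retry < renew < lease, roughly the client-go leader-election sanity rule, so a healthy leader renews before its 5s lease can be claimed. A trivial check of the generated values:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Leader-election knobs from the generated kube-vip manifest above.
	lease := 5 * time.Second // vip_leaseduration
	renew := 3 * time.Second // vip_renewdeadline
	retry := 1 * time.Second // vip_retryperiod
	// A healthy leader must be able to retry and renew before its
	// lease can be claimed by another control-plane node.
	fmt.Println("timings valid:", retry < renew && renew < lease)
}
```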
	I0815 16:31:36.678722    3848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 16:31:36.686802    3848 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 16:31:36.686869    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0815 16:31:36.694600    3848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0815 16:31:36.708358    3848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 16:31:36.721865    3848 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0815 16:31:36.736604    3848 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0815 16:31:36.739496    3848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 16:31:36.748868    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:31:36.847387    3848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:31:36.862652    3848 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 16:31:36.862839    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:31:36.884247    3848 out.go:177] * Verifying Kubernetes components...
	I0815 16:31:36.904597    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:31:37.032729    3848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:31:37.044674    3848 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:31:37.044869    3848 kapi.go:59] client config for ha-138000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.key", CAFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x3a6cf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0815 16:31:37.044913    3848 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0815 16:31:37.045078    3848 node_ready.go:35] waiting up to 6m0s for node "ha-138000-m03" to be "Ready" ...
	I0815 16:31:37.045127    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:37.045132    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.045138    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.045142    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.047558    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.545663    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:37.545719    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.545727    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.545756    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.548346    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.548775    3848 node_ready.go:49] node "ha-138000-m03" has status "Ready":"True"
	I0815 16:31:37.548786    3848 node_ready.go:38] duration metric: took 503.701087ms for node "ha-138000-m03" to be "Ready" ...
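
Note: the node_ready/pod_ready waits that follow are plain poll loops: GET the object every ~500ms until its Ready condition is True or the 6m budget runs out. A generic sketch of that shape (not minikube's actual helper):

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// pollUntil calls check every interval until it reports true, errors,
// or ctx expires; the half-second cadence below matches the spacing
// of the GETs in the log.
func pollUntil(ctx context.Context, interval time.Duration,
	check func() (bool, error)) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		done, err := check()
		if err != nil {
			return err
		}
		if done {
			return nil
		}
		select {
		case <-ctx.Done():
			return errors.New("timed out waiting for condition")
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	n := 0
	err := pollUntil(ctx, 500*time.Millisecond, func() (bool, error) {
		n++ // stand-in for GET .../nodes/ha-138000-m03
		return n >= 3, nil
	})
	fmt.Println("ready after", n, "checks, err =", err)
}
```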
	I0815 16:31:37.548799    3848 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 16:31:37.548839    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:31:37.548848    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.548854    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.548859    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.555174    3848 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 16:31:37.561193    3848 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dmgt5" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:37.561251    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-dmgt5
	I0815 16:31:37.561256    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.561262    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.561267    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.563487    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.564065    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:37.564072    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.564078    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.564081    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.566147    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.566458    3848 pod_ready.go:93] pod "coredns-6f6b679f8f-dmgt5" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:37.566468    3848 pod_ready.go:82] duration metric: took 5.259716ms for pod "coredns-6f6b679f8f-dmgt5" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:37.566475    3848 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-zc8jj" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:37.566514    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-zc8jj
	I0815 16:31:37.566519    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.566525    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.566529    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.568717    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.569347    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:37.569355    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.569361    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.569365    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.571508    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.571903    3848 pod_ready.go:93] pod "coredns-6f6b679f8f-zc8jj" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:37.571913    3848 pod_ready.go:82] duration metric: took 5.431792ms for pod "coredns-6f6b679f8f-zc8jj" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:37.571919    3848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:37.571962    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000
	I0815 16:31:37.571967    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.571973    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.571976    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.574222    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.574650    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:37.574659    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.574665    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.574669    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.576917    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.577415    3848 pod_ready.go:93] pod "etcd-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:37.577426    3848 pod_ready.go:82] duration metric: took 5.501032ms for pod "etcd-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:37.577433    3848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:37.577470    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m02
	I0815 16:31:37.577478    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.577485    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.577489    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.579610    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.580030    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:37.580038    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.580044    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.580049    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.582713    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.583250    3848 pod_ready.go:93] pod "etcd-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:37.583261    3848 pod_ready.go:82] duration metric: took 5.823471ms for pod "etcd-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:37.583269    3848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:37.745749    3848 request.go:632] Waited for 162.439343ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:37.745806    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:37.745816    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.745824    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.745836    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.748134    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.945907    3848 request.go:632] Waited for 197.272516ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:37.945950    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:37.945956    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.945962    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.945966    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.948855    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:38.146195    3848 request.go:632] Waited for 62.814852ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:38.146243    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:38.146249    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:38.146296    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:38.146301    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:38.149137    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:38.346943    3848 request.go:632] Waited for 197.306674ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:38.346985    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:38.346994    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:38.347003    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:38.347010    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:38.349878    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
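
Note: the "Waited for ... due to client-side throttling" lines come from client-go's token-bucket request limiter; once the burst is spent, each request sleeps until a token frees up, which is the delay being reported. The QPS=5/burst=10 values below are client-go's historical defaults, an assumption for this sketch (requires golang.org/x/time/rate):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// Token bucket like client-go's default request throttle:
	// burst of 10, refilled at 5 tokens/second (assumed defaults).
	limiter := rate.NewLimiter(rate.Limit(5), 10)
	ctx := context.Background()
	start := time.Now()
	for i := 0; i < 15; i++ {
		_ = limiter.Wait(ctx) // blocks once the burst is spent
	}
	fmt.Printf("15 requests took %v (burst 10, then 5/s)\n",
		time.Since(start).Round(100*time.Millisecond))
}
```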
	I0815 16:31:38.583459    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:38.583505    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:38.583514    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:38.583520    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:38.590031    3848 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 16:31:38.745745    3848 request.go:632] Waited for 155.336663ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:38.745818    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:38.745825    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:38.745831    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:38.745836    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:38.748530    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:39.083990    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:39.084003    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:39.084009    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:39.084013    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:39.086519    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:39.146468    3848 request.go:632] Waited for 59.248658ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:39.146510    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:39.146515    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:39.146521    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:39.146525    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:39.148504    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:39.583999    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:39.584017    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:39.584026    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:39.584029    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:39.589510    3848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 16:31:39.590427    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:39.590438    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:39.590445    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:39.590449    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:39.592655    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:39.593056    3848 pod_ready.go:103] pod "etcd-ha-138000-m03" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:40.084185    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:40.084202    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:40.084209    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:40.084214    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:40.086419    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:40.087158    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:40.087166    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:40.087172    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:40.087196    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:40.088975    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:40.584037    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:40.584051    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:40.584058    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:40.584061    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:40.586450    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:40.586944    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:40.586952    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:40.586958    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:40.586963    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:40.589014    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:41.083405    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:41.083421    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:41.083427    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:41.083433    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:41.086228    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:41.086971    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:41.086978    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:41.086985    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:41.086990    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:41.097843    3848 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0815 16:31:41.583963    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:41.583987    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:41.583999    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:41.584008    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:41.587268    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:41.588066    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:41.588074    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:41.588079    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:41.588083    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:41.589716    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:42.083443    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:42.083462    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:42.083471    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:42.083482    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:42.085751    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:42.086179    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:42.086187    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:42.086194    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:42.086197    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:42.087825    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:42.088133    3848 pod_ready.go:103] pod "etcd-ha-138000-m03" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:42.584042    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:42.584070    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:42.584081    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:42.584089    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:42.587530    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:42.588287    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:42.588295    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:42.588301    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:42.588305    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:42.589868    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:43.085149    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:43.085164    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:43.085170    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:43.085174    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:43.087319    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:43.087818    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:43.087825    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:43.087831    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:43.087834    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:43.089562    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:43.583720    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:43.583737    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:43.583744    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:43.583747    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:43.586238    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:43.586831    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:43.586842    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:43.586849    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:43.586852    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:43.589092    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:44.084178    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:44.084189    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:44.084195    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:44.084198    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:44.086364    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:44.086790    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:44.086798    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:44.086805    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:44.086809    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:44.088812    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:44.089107    3848 pod_ready.go:103] pod "etcd-ha-138000-m03" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:44.584718    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:44.584743    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:44.584755    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:44.584763    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:44.587851    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:44.588606    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:44.588615    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:44.588621    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:44.588624    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:44.590403    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:45.083471    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:45.083486    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:45.083492    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:45.083496    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:45.085722    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:45.086170    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:45.086177    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:45.086186    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:45.086189    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:45.087992    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:45.583684    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:45.583761    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:45.583775    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:45.583782    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:45.586696    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:45.587281    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:45.587292    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:45.587300    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:45.587305    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:45.588851    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:46.083567    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:46.083581    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:46.083590    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:46.083595    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:46.086254    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:46.086706    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:46.086714    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:46.086720    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:46.086724    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:46.088505    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:46.583431    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:46.583454    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:46.583474    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:46.583477    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:46.586641    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:46.587367    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:46.587376    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:46.587383    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:46.587389    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:46.590271    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:46.590924    3848 pod_ready.go:103] pod "etcd-ha-138000-m03" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:47.085070    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:47.085088    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:47.085094    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:47.085097    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:47.087411    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:47.087834    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:47.087841    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:47.087847    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:47.087856    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:47.089857    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:47.583460    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:47.583510    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:47.583537    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:47.583547    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:47.586412    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:47.587147    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:47.587155    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:47.587161    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:47.587164    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:47.589077    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:48.084130    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:48.084172    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:48.084180    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:48.084184    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:48.086241    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:48.086700    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:48.086708    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:48.086715    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:48.086719    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:48.088392    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:48.583712    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:48.583726    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:48.583733    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:48.583736    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:48.585950    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:48.586404    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:48.586411    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:48.586417    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:48.586420    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:48.588064    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:49.084795    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:49.084810    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:49.084817    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:49.084821    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:49.087201    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:49.087638    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:49.087646    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:49.087651    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:49.087655    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:49.089294    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:49.089762    3848 pod_ready.go:103] pod "etcd-ha-138000-m03" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:49.584532    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:49.584586    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:49.584596    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:49.584602    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:49.586828    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:49.587368    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:49.587376    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:49.587381    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:49.587386    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:49.589092    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:50.084677    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:50.084702    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:50.084714    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:50.084720    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:50.090233    3848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 16:31:50.091082    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:50.091090    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:50.091095    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:50.091098    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:50.093397    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:50.584557    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:50.584594    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:50.584607    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:50.584614    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:50.587331    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:50.588105    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:50.588113    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:50.588119    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:50.588122    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:50.589783    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:51.084222    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:51.084238    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:51.084245    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:51.084249    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:51.086498    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:51.086853    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:51.086860    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:51.086866    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:51.086869    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:51.088548    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:51.583648    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:51.583662    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:51.583669    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:51.583673    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:51.585837    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:51.586356    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:51.586364    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:51.586370    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:51.586374    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:51.588027    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:51.588324    3848 pod_ready.go:103] pod "etcd-ha-138000-m03" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:52.083439    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:52.083464    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.083477    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.083486    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.086839    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:52.087326    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:52.087334    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.087340    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.087344    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.089021    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:52.089421    3848 pod_ready.go:93] pod "etcd-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:52.089431    3848 pod_ready.go:82] duration metric: took 14.506206257s for pod "etcd-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
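	The loop above is minikube's readiness poll: every ~500 ms it issues two GETs, one for the pod and one for the node hosting it, until the pod's Ready condition flips to True (pod_ready.go:103 logs each False, pod_ready.go:93 the final True). A minimal client-go sketch of the same pattern, assuming a configured *kubernetes.Clientset; waitPodReady is an illustrative name, not minikube's actual helper:
	
		package sketch
	
		import (
			"context"
			"fmt"
			"time"
	
			corev1 "k8s.io/api/core/v1"
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
		)
	
		// waitPodReady polls a pod every 500ms, as the log above does, until
		// its PodReady condition reports True or the timeout expires.
		func waitPodReady(ctx context.Context, c *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
			deadline := time.Now().Add(timeout)
			for time.Now().Before(deadline) {
				pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return err
				}
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
						return nil
					}
				}
				time.Sleep(500 * time.Millisecond)
			}
			return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
		}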
	I0815 16:31:52.089443    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:52.089476    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000
	I0815 16:31:52.089481    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.089487    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.089490    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.091044    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:52.091506    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:52.091513    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.091519    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.091522    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.093067    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:52.093523    3848 pod_ready.go:93] pod "kube-apiserver-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:52.093534    3848 pod_ready.go:82] duration metric: took 4.083615ms for pod "kube-apiserver-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:52.093540    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:52.093569    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:31:52.093574    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.093579    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.093583    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.096079    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:52.096682    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:52.096689    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.096695    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.096698    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.098629    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:52.099014    3848 pod_ready.go:93] pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:52.099023    3848 pod_ready.go:82] duration metric: took 5.477344ms for pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:52.099030    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:52.099060    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m03
	I0815 16:31:52.099065    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.099071    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.099075    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.100773    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:52.101171    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:52.101178    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.101184    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.101188    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.108504    3848 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0815 16:31:52.599355    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m03
	I0815 16:31:52.599371    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.599378    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.599380    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.603474    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:31:52.603827    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:52.603834    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.603839    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.603842    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.607400    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:53.100426    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m03
	I0815 16:31:53.100452    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:53.100465    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:53.100469    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:53.103591    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:53.103977    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:53.103985    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:53.103991    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:53.103995    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:53.105550    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:53.600030    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m03
	I0815 16:31:53.600056    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:53.600098    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:53.600106    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:53.603820    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:53.604279    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:53.604287    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:53.604292    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:53.604302    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:53.605948    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:54.100215    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m03
	I0815 16:31:54.100240    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.100248    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.100254    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.103639    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:54.104211    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:54.104222    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.104230    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.104236    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.106285    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:54.106596    3848 pod_ready.go:103] pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:54.600238    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m03
	I0815 16:31:54.600262    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.600275    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.600280    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.603528    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:54.604248    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:54.604259    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.604268    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.604276    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.606261    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:54.606605    3848 pod_ready.go:93] pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:54.606614    3848 pod_ready.go:82] duration metric: took 2.507587207s for pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:54.606621    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:54.606652    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:54.606657    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.606663    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.606677    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.608196    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:54.608645    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:54.608652    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.608658    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.608661    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.610174    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:54.610543    3848 pod_ready.go:93] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:54.610551    3848 pod_ready.go:82] duration metric: took 3.924647ms for pod "kube-controller-manager-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:54.610565    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:54.610597    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000-m02
	I0815 16:31:54.610601    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.610607    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.610611    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.612220    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:54.612637    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:54.612644    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.612648    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.612652    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.614115    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:54.614453    3848 pod_ready.go:93] pod "kube-controller-manager-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:54.614461    3848 pod_ready.go:82] duration metric: took 3.890604ms for pod "kube-controller-manager-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:54.614467    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:54.685393    3848 request.go:632] Waited for 70.886034ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000-m03
	I0815 16:31:54.685542    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000-m03
	I0815 16:31:54.685554    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.685565    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.685572    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.689462    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:54.884047    3848 request.go:632] Waited for 194.079873ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:54.884179    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:54.884194    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.884206    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.884216    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.887378    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:54.887638    3848 pod_ready.go:93] pod "kube-controller-manager-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:54.887648    3848 pod_ready.go:82] duration metric: took 273.176916ms for pod "kube-controller-manager-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
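	The "Waited for ... due to client-side throttling, not priority and fairness" lines just above come from client-go's own token-bucket limiter on the client, not from API Priority and Fairness on the server. A sketch of where those limits live, assuming client-go; the QPS/Burst values shown are client-go's historical defaults, not necessarily minikube's settings:
	
		package sketch
	
		import (
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/tools/clientcmd"
		)
	
		// newThrottledClient builds a clientset whose requests beyond Burst
		// are delayed to sustain QPS, producing log lines like
		// "Waited for 196.76753ms due to client-side throttling".
		func newThrottledClient(kubeconfig string) (*kubernetes.Clientset, error) {
			cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
			if err != nil {
				return nil, err
			}
			cfg.QPS = 5    // steady-state requests per second
			cfg.Burst = 10 // short bursts allowed above QPS
			return kubernetes.NewForConfig(cfg)
		}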
	I0815 16:31:54.887655    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cznkn" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:55.084696    3848 request.go:632] Waited for 197.006461ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cznkn
	I0815 16:31:55.084754    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cznkn
	I0815 16:31:55.084760    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:55.084766    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:55.084770    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:55.086486    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:55.284932    3848 request.go:632] Waited for 198.019424ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:55.285014    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:55.285023    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:55.285031    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:55.285034    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:55.287587    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:55.288003    3848 pod_ready.go:93] pod "kube-proxy-cznkn" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:55.288012    3848 pod_ready.go:82] duration metric: took 400.352996ms for pod "kube-proxy-cznkn" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:55.288019    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kxghx" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:55.484813    3848 request.go:632] Waited for 196.749045ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kxghx
	I0815 16:31:55.484909    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kxghx
	I0815 16:31:55.484933    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:55.484946    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:55.484952    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:55.487936    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:55.684903    3848 request.go:632] Waited for 196.468256ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:55.684989    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:55.684999    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:55.685010    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:55.685019    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:55.688164    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:55.688606    3848 pod_ready.go:93] pod "kube-proxy-kxghx" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:55.688619    3848 pod_ready.go:82] duration metric: took 400.595564ms for pod "kube-proxy-kxghx" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:55.688628    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qpth7" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:55.884647    3848 request.go:632] Waited for 195.972571ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qpth7
	I0815 16:31:55.884703    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qpth7
	I0815 16:31:55.884734    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:55.884828    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:55.884842    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:55.887780    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:56.085059    3848 request.go:632] Waited for 196.76753ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m04
	I0815 16:31:56.085155    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m04
	I0815 16:31:56.085166    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:56.085178    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:56.085187    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:56.088438    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:56.088843    3848 pod_ready.go:98] node "ha-138000-m04" hosting pod "kube-proxy-qpth7" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-138000-m04" has status "Ready":"Unknown"
	I0815 16:31:56.088858    3848 pod_ready.go:82] duration metric: took 400.224535ms for pod "kube-proxy-qpth7" in "kube-system" namespace to be "Ready" ...
	E0815 16:31:56.088867    3848 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-138000-m04" hosting pod "kube-proxy-qpth7" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-138000-m04" has status "Ready":"Unknown"
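	Here the wait is skipped rather than failed: kube-proxy-qpth7 is scheduled on ha-138000-m04, whose Ready condition reports "Unknown", so blocking on that pod could never succeed. A hedged sketch of the node-side check, assuming client-go; nodeIsReady is an illustrative name:
	
		package sketch
	
		import (
			"context"
	
			corev1 "k8s.io/api/core/v1"
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
		)
	
		// nodeIsReady reports whether a node's Ready condition is True;
		// "Unknown" (as for ha-138000-m04 above) counts as not ready.
		func nodeIsReady(ctx context.Context, c *kubernetes.Clientset, name string) (bool, error) {
			node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		}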
	I0815 16:31:56.088873    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tf79g" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:56.284412    3848 request.go:632] Waited for 195.467169ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tf79g
	I0815 16:31:56.284533    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tf79g
	I0815 16:31:56.284544    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:56.284556    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:56.284567    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:56.287997    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:56.483641    3848 request.go:632] Waited for 195.132786ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:56.483717    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:56.483778    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:56.483801    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:56.483810    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:56.486922    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:56.487377    3848 pod_ready.go:93] pod "kube-proxy-tf79g" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:56.487387    3848 pod_ready.go:82] duration metric: took 398.50917ms for pod "kube-proxy-tf79g" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:56.487394    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:56.684509    3848 request.go:632] Waited for 197.075187ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000
	I0815 16:31:56.684584    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000
	I0815 16:31:56.684592    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:56.684600    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:56.684606    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:56.687177    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:56.884267    3848 request.go:632] Waited for 196.705982ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:56.884375    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:56.884384    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:56.884392    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:56.884396    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:56.886486    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:56.886846    3848 pod_ready.go:93] pod "kube-scheduler-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:56.886854    3848 pod_ready.go:82] duration metric: took 399.455831ms for pod "kube-scheduler-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:56.886860    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:57.083869    3848 request.go:632] Waited for 196.961301ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m02
	I0815 16:31:57.083950    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m02
	I0815 16:31:57.083960    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:57.083983    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:57.083992    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:57.087081    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:57.285517    3848 request.go:632] Waited for 197.962246ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:57.285639    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:57.285649    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:57.285659    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:57.285667    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:57.288947    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:57.289317    3848 pod_ready.go:93] pod "kube-scheduler-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:57.289331    3848 pod_ready.go:82] duration metric: took 402.465658ms for pod "kube-scheduler-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:57.289340    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:57.483919    3848 request.go:632] Waited for 194.531212ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m03
	I0815 16:31:57.484018    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m03
	I0815 16:31:57.484029    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:57.484041    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:57.484049    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:57.486736    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:57.683533    3848 request.go:632] Waited for 196.372817ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:57.683619    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:57.683630    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:57.683642    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:57.683649    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:57.686767    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:57.687131    3848 pod_ready.go:93] pod "kube-scheduler-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:57.687146    3848 pod_ready.go:82] duration metric: took 397.799248ms for pod "kube-scheduler-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:57.687155    3848 pod_ready.go:39] duration metric: took 20.138416099s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 16:31:57.687170    3848 api_server.go:52] waiting for apiserver process to appear ...
	I0815 16:31:57.687237    3848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 16:31:57.700597    3848 api_server.go:72] duration metric: took 20.837986375s to wait for apiserver process to appear ...
	I0815 16:31:57.700610    3848 api_server.go:88] waiting for apiserver healthz status ...
	I0815 16:31:57.700622    3848 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0815 16:31:57.703621    3848 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
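	The healthz probe is a plain HTTP GET against the apiserver that must return 200 with the literal body "ok". A minimal sketch, assuming an *http.Client already configured with the cluster CA (TLS setup elided here):
	
		package sketch
	
		import (
			"io"
			"net/http"
		)
	
		// apiserverHealthy GETs <base>/healthz and accepts only a 200
		// response whose body is exactly "ok", as logged above.
		func apiserverHealthy(client *http.Client, base string) (bool, error) {
			resp, err := client.Get(base + "/healthz")
			if err != nil {
				return false, err
			}
			defer resp.Body.Close()
			body, err := io.ReadAll(resp.Body)
			if err != nil {
				return false, err
			}
			return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
		}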
	I0815 16:31:57.703653    3848 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0815 16:31:57.703658    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:57.703664    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:57.703670    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:57.704168    3848 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0815 16:31:57.704198    3848 api_server.go:141] control plane version: v1.31.0
	I0815 16:31:57.704207    3848 api_server.go:131] duration metric: took 3.590796ms to wait for apiserver health ...
	I0815 16:31:57.704213    3848 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 16:31:57.884532    3848 request.go:632] Waited for 180.27549ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:31:57.884634    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:31:57.884645    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:57.884656    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:57.884661    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:57.889257    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:31:57.894492    3848 system_pods.go:59] 26 kube-system pods found
	I0815 16:31:57.894504    3848 system_pods.go:61] "coredns-6f6b679f8f-dmgt5" [47d73953-ec2c-4f17-b2b8-d6a9b5e5a316] Running
	I0815 16:31:57.894508    3848 system_pods.go:61] "coredns-6f6b679f8f-zc8jj" [b4a9df39-b09d-4bc3-97f6-b3176ff8e842] Running
	I0815 16:31:57.894511    3848 system_pods.go:61] "etcd-ha-138000" [c2e7e508-a157-4581-a446-2f48e51c0c16] Running
	I0815 16:31:57.894514    3848 system_pods.go:61] "etcd-ha-138000-m02" [4f371d3f-821b-4eae-ab52-5b75d90acd67] Running
	I0815 16:31:57.894516    3848 system_pods.go:61] "etcd-ha-138000-m03" [ebdd90b3-c3b2-44c2-a411-4f6afa67a455] Running
	I0815 16:31:57.894519    3848 system_pods.go:61] "kindnet-77dc6" [24bfc069-9446-43a7-aa61-308c1a62a20e] Running
	I0815 16:31:57.894522    3848 system_pods.go:61] "kindnet-dsvxt" [7645852b-8535-4cb4-8324-15c9b55176e3] Running
	I0815 16:31:57.894525    3848 system_pods.go:61] "kindnet-m887r" [ba31865b-c712-47a8-9fd8-06420270ac8b] Running
	I0815 16:31:57.894527    3848 system_pods.go:61] "kindnet-z6mnx" [fc6211a0-df99-4350-90ba-a18f74bd1bfc] Running
	I0815 16:31:57.894530    3848 system_pods.go:61] "kube-apiserver-ha-138000" [bbc684f7-d8e2-44f2-987a-df9d05cf54fc] Running
	I0815 16:31:57.894534    3848 system_pods.go:61] "kube-apiserver-ha-138000-m02" [c30e129b-f475-4131-a25e-7eeecb39cbea] Running
	I0815 16:31:57.894537    3848 system_pods.go:61] "kube-apiserver-ha-138000-m03" [90304f02-10f2-446d-b731-674df5602401] Running
	I0815 16:31:57.894541    3848 system_pods.go:61] "kube-controller-manager-ha-138000" [1d4148d1-9798-4662-91de-d9a7dae634e2] Running
	I0815 16:31:57.894545    3848 system_pods.go:61] "kube-controller-manager-ha-138000-m02" [ad693dae-cb0a-46ca-955b-33a9465d911a] Running
	I0815 16:31:57.894547    3848 system_pods.go:61] "kube-controller-manager-ha-138000-m03" [11aa509a-dade-4b66-b157-4d4cccb7f349] Running
	I0815 16:31:57.894550    3848 system_pods.go:61] "kube-proxy-cznkn" [61cd1a9a-80fb-4d0e-a4dd-f5ed2d6ddc0f] Running
	I0815 16:31:57.894553    3848 system_pods.go:61] "kube-proxy-kxghx" [27f61dcc-2812-4038-af2a-aa9f46033869] Running
	I0815 16:31:57.894555    3848 system_pods.go:61] "kube-proxy-qpth7" [a343f80b-0fe9-4c88-9782-5fbf9a6170d1] Running
	I0815 16:31:57.894558    3848 system_pods.go:61] "kube-proxy-tf79g" [ef8a2aeb-55c4-436e-a306-da8b93c9707f] Running
	I0815 16:31:57.894560    3848 system_pods.go:61] "kube-scheduler-ha-138000" [ee1d3eb0-596d-4bf0-bed4-dbd38b286e38] Running
	I0815 16:31:57.894563    3848 system_pods.go:61] "kube-scheduler-ha-138000-m02" [1cc29309-49ad-427e-862a-758cc5711eef] Running
	I0815 16:31:57.894566    3848 system_pods.go:61] "kube-scheduler-ha-138000-m03" [2e3efb3c-6903-4306-adec-d4a47b3a56bd] Running
	I0815 16:31:57.894572    3848 system_pods.go:61] "kube-vip-ha-138000" [1b8c66e5-ffaa-4d4b-a220-8eb664b79d91] Running
	I0815 16:31:57.894575    3848 system_pods.go:61] "kube-vip-ha-138000-m02" [a0fe2ba6-1ace-456f-ab2e-15c43303dcad] Running
	I0815 16:31:57.894578    3848 system_pods.go:61] "kube-vip-ha-138000-m03" [44fe2bd5-bd48-4f86-b38a-7e4b3554174c] Running
	I0815 16:31:57.894581    3848 system_pods.go:61] "storage-provisioner" [35f3a9d7-cb68-4b10-82a2-1bd72d8d1aa6] Running
	I0815 16:31:57.894585    3848 system_pods.go:74] duration metric: took 190.369062ms to wait for pod list to return data ...
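	system_pods.go lists everything in kube-system in one request and checks each pod individually. A simplified one-shot sketch of that check, assuming client-go; minikube's real wait is the labels-based one summarized at pod_ready.go:39 above:
	
		package sketch
	
		import (
			"context"
	
			corev1 "k8s.io/api/core/v1"
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
		)
	
		// systemPodsRunning lists kube-system pods once and reports whether
		// every pod is in the Running phase, as in the 26-pod list above.
		func systemPodsRunning(ctx context.Context, c *kubernetes.Clientset) (bool, error) {
			pods, err := c.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
			if err != nil {
				return false, err
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		}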
	I0815 16:31:57.894590    3848 default_sa.go:34] waiting for default service account to be created ...
	I0815 16:31:58.083903    3848 request.go:632] Waited for 189.255195ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0815 16:31:58.083992    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0815 16:31:58.084004    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:58.084016    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:58.084024    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:58.087624    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:58.087687    3848 default_sa.go:45] found service account: "default"
	I0815 16:31:58.087696    3848 default_sa.go:55] duration metric: took 193.101509ms for default service account to be created ...
	I0815 16:31:58.087703    3848 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 16:31:58.284595    3848 request.go:632] Waited for 196.812141ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:31:58.284716    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:31:58.284728    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:58.284740    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:58.284748    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:58.290177    3848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 16:31:58.295724    3848 system_pods.go:86] 26 kube-system pods found
	I0815 16:31:58.295738    3848 system_pods.go:89] "coredns-6f6b679f8f-dmgt5" [47d73953-ec2c-4f17-b2b8-d6a9b5e5a316] Running
	I0815 16:31:58.295742    3848 system_pods.go:89] "coredns-6f6b679f8f-zc8jj" [b4a9df39-b09d-4bc3-97f6-b3176ff8e842] Running
	I0815 16:31:58.295747    3848 system_pods.go:89] "etcd-ha-138000" [c2e7e508-a157-4581-a446-2f48e51c0c16] Running
	I0815 16:31:58.295759    3848 system_pods.go:89] "etcd-ha-138000-m02" [4f371d3f-821b-4eae-ab52-5b75d90acd67] Running
	I0815 16:31:58.295765    3848 system_pods.go:89] "etcd-ha-138000-m03" [ebdd90b3-c3b2-44c2-a411-4f6afa67a455] Running
	I0815 16:31:58.295768    3848 system_pods.go:89] "kindnet-77dc6" [24bfc069-9446-43a7-aa61-308c1a62a20e] Running
	I0815 16:31:58.295779    3848 system_pods.go:89] "kindnet-dsvxt" [7645852b-8535-4cb4-8324-15c9b55176e3] Running
	I0815 16:31:58.295783    3848 system_pods.go:89] "kindnet-m887r" [ba31865b-c712-47a8-9fd8-06420270ac8b] Running
	I0815 16:31:58.295786    3848 system_pods.go:89] "kindnet-z6mnx" [fc6211a0-df99-4350-90ba-a18f74bd1bfc] Running
	I0815 16:31:58.295789    3848 system_pods.go:89] "kube-apiserver-ha-138000" [bbc684f7-d8e2-44f2-987a-df9d05cf54fc] Running
	I0815 16:31:58.295791    3848 system_pods.go:89] "kube-apiserver-ha-138000-m02" [c30e129b-f475-4131-a25e-7eeecb39cbea] Running
	I0815 16:31:58.295795    3848 system_pods.go:89] "kube-apiserver-ha-138000-m03" [90304f02-10f2-446d-b731-674df5602401] Running
	I0815 16:31:58.295798    3848 system_pods.go:89] "kube-controller-manager-ha-138000" [1d4148d1-9798-4662-91de-d9a7dae634e2] Running
	I0815 16:31:58.295801    3848 system_pods.go:89] "kube-controller-manager-ha-138000-m02" [ad693dae-cb0a-46ca-955b-33a9465d911a] Running
	I0815 16:31:58.295804    3848 system_pods.go:89] "kube-controller-manager-ha-138000-m03" [11aa509a-dade-4b66-b157-4d4cccb7f349] Running
	I0815 16:31:58.295807    3848 system_pods.go:89] "kube-proxy-cznkn" [61cd1a9a-80fb-4d0e-a4dd-f5ed2d6ddc0f] Running
	I0815 16:31:58.295814    3848 system_pods.go:89] "kube-proxy-kxghx" [27f61dcc-2812-4038-af2a-aa9f46033869] Running
	I0815 16:31:58.295818    3848 system_pods.go:89] "kube-proxy-qpth7" [a343f80b-0fe9-4c88-9782-5fbf9a6170d1] Running
	I0815 16:31:58.295821    3848 system_pods.go:89] "kube-proxy-tf79g" [ef8a2aeb-55c4-436e-a306-da8b93c9707f] Running
	I0815 16:31:58.295824    3848 system_pods.go:89] "kube-scheduler-ha-138000" [ee1d3eb0-596d-4bf0-bed4-dbd38b286e38] Running
	I0815 16:31:58.295827    3848 system_pods.go:89] "kube-scheduler-ha-138000-m02" [1cc29309-49ad-427e-862a-758cc5711eef] Running
	I0815 16:31:58.295830    3848 system_pods.go:89] "kube-scheduler-ha-138000-m03" [2e3efb3c-6903-4306-adec-d4a47b3a56bd] Running
	I0815 16:31:58.295833    3848 system_pods.go:89] "kube-vip-ha-138000" [1b8c66e5-ffaa-4d4b-a220-8eb664b79d91] Running
	I0815 16:31:58.295836    3848 system_pods.go:89] "kube-vip-ha-138000-m02" [a0fe2ba6-1ace-456f-ab2e-15c43303dcad] Running
	I0815 16:31:58.295838    3848 system_pods.go:89] "kube-vip-ha-138000-m03" [44fe2bd5-bd48-4f86-b38a-7e4b3554174c] Running
	I0815 16:31:58.295841    3848 system_pods.go:89] "storage-provisioner" [35f3a9d7-cb68-4b10-82a2-1bd72d8d1aa6] Running
	I0815 16:31:58.295845    3848 system_pods.go:126] duration metric: took 208.13908ms to wait for k8s-apps to be running ...
	I0815 16:31:58.295851    3848 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 16:31:58.295902    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 16:31:58.307696    3848 system_svc.go:56] duration metric: took 11.840404ms WaitForService to wait for kubelet
	I0815 16:31:58.307710    3848 kubeadm.go:582] duration metric: took 21.445104276s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 16:31:58.307721    3848 node_conditions.go:102] verifying NodePressure condition ...
	I0815 16:31:58.483467    3848 request.go:632] Waited for 175.699042ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0815 16:31:58.483523    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0815 16:31:58.483531    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:58.483546    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:58.483605    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:58.487271    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:58.488234    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:31:58.488246    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:31:58.488253    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:31:58.488256    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:31:58.488259    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:31:58.488263    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:31:58.488266    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:31:58.488269    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:31:58.488272    3848 node_conditions.go:105] duration metric: took 180.547852ms to run NodePressure ...
	I0815 16:31:58.488280    3848 start.go:241] waiting for startup goroutines ...
	I0815 16:31:58.488303    3848 start.go:255] writing updated cluster config ...
	I0815 16:31:58.511626    3848 out.go:201] 
	I0815 16:31:58.532028    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:31:58.532166    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:31:58.553589    3848 out.go:177] * Starting "ha-138000-m04" worker node in "ha-138000" cluster
	I0815 16:31:58.594430    3848 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:31:58.594502    3848 cache.go:56] Caching tarball of preloaded images
	I0815 16:31:58.594676    3848 preload.go:172] Found /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0815 16:31:58.594694    3848 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:31:58.594833    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:31:58.595712    3848 start.go:360] acquireMachinesLock for ha-138000-m04: {Name:mkac8034c8a60617271f2754a03dcaf74fc4fef7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:31:58.595816    3848 start.go:364] duration metric: took 79.794µs to acquireMachinesLock for "ha-138000-m04"
	I0815 16:31:58.595841    3848 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:31:58.595851    3848 fix.go:54] fixHost starting: m04
	I0815 16:31:58.596274    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:31:58.596311    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:31:58.605762    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52311
	I0815 16:31:58.606137    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:31:58.606475    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:31:58.606484    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:31:58.606737    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:31:58.606878    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:31:58.606971    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetState
	I0815 16:31:58.607059    3848 main.go:141] libmachine: (ha-138000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:31:58.607149    3848 main.go:141] libmachine: (ha-138000-m04) DBG | hyperkit pid from json: 3240
	I0815 16:31:58.608054    3848 main.go:141] libmachine: (ha-138000-m04) DBG | hyperkit pid 3240 missing from process table
	I0815 16:31:58.608090    3848 fix.go:112] recreateIfNeeded on ha-138000-m04: state=Stopped err=<nil>
	I0815 16:31:58.608101    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	W0815 16:31:58.608193    3848 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:31:58.629670    3848 out.go:177] * Restarting existing hyperkit VM for "ha-138000-m04" ...
	I0815 16:31:58.671397    3848 main.go:141] libmachine: (ha-138000-m04) Calling .Start
	I0815 16:31:58.671607    3848 main.go:141] libmachine: (ha-138000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:31:58.671648    3848 main.go:141] libmachine: (ha-138000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/hyperkit.pid
	I0815 16:31:58.671760    3848 main.go:141] libmachine: (ha-138000-m04) DBG | Using UUID e49817f2-f6c4-46a0-a846-8a8b2da04ea9
	I0815 16:31:58.700620    3848 main.go:141] libmachine: (ha-138000-m04) DBG | Generated MAC 66:d1:6e:6f:24:26
	I0815 16:31:58.700645    3848 main.go:141] libmachine: (ha-138000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000
	I0815 16:31:58.700779    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e49817f2-f6c4-46a0-a846-8a8b2da04ea9", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002ad680)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:31:58.700809    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e49817f2-f6c4-46a0-a846-8a8b2da04ea9", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002ad680)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:31:58.700889    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "e49817f2-f6c4-46a0-a846-8a8b2da04ea9", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/ha-138000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"}
	I0815 16:31:58.700927    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U e49817f2-f6c4-46a0-a846-8a8b2da04ea9 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/ha-138000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"
	I0815 16:31:58.700973    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0815 16:31:58.702332    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 DEBUG: hyperkit: Pid is 4201
	I0815 16:31:58.702793    3848 main.go:141] libmachine: (ha-138000-m04) DBG | Attempt 0
	I0815 16:31:58.702829    3848 main.go:141] libmachine: (ha-138000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:31:58.702904    3848 main.go:141] libmachine: (ha-138000-m04) DBG | hyperkit pid from json: 4201
	I0815 16:31:58.703953    3848 main.go:141] libmachine: (ha-138000-m04) DBG | Searching for 66:d1:6e:6f:24:26 in /var/db/dhcpd_leases ...
	I0815 16:31:58.704027    3848 main.go:141] libmachine: (ha-138000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0815 16:31:58.704048    3848 main.go:141] libmachine: (ha-138000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:31:58.704066    3848 main.go:141] libmachine: (ha-138000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:31:58.704081    3848 main.go:141] libmachine: (ha-138000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:31:58.704095    3848 main.go:141] libmachine: (ha-138000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be8e15}
	I0815 16:31:58.704105    3848 main.go:141] libmachine: (ha-138000-m04) DBG | Found match: 66:d1:6e:6f:24:26
	I0815 16:31:58.704118    3848 main.go:141] libmachine: (ha-138000-m04) DBG | IP: 192.169.0.8
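
The lines above show the hyperkit driver recovering the restarted VM's IP: it scans the macOS DHCP lease file for the MAC address it generated for the guest NIC. A minimal sketch of the same lookup run by hand on the host, using the MAC from this log (any other MAC would be illustrative only):

	# Find the DHCP lease entry recorded for the VM's generated MAC address
	grep -i -B 2 '66:d1:6e:6f:24:26' /var/db/dhcpd_leases
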
	I0815 16:31:58.704138    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetConfigRaw
	I0815 16:31:58.704996    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetIP
	I0815 16:31:58.705244    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:31:58.705856    3848 machine.go:93] provisionDockerMachine start ...
	I0815 16:31:58.705869    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:31:58.705978    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:31:58.706098    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:31:58.706206    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:31:58.706333    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:31:58.706439    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:31:58.706614    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:31:58.706786    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0815 16:31:58.706796    3848 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 16:31:58.710462    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0815 16:31:58.720101    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0815 16:31:58.720991    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:31:58.721013    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:31:58.721022    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:31:58.721032    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:31:59.105309    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0815 16:31:59.105335    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0815 16:31:59.220059    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:31:59.220079    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:31:59.220089    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:31:59.220095    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:31:59.220911    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0815 16:31:59.220942    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0815 16:32:04.889008    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:32:04 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0815 16:32:04.889030    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:32:04 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0815 16:32:04.889049    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:32:04 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0815 16:32:04.912331    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:32:04 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0815 16:32:33.787060    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 16:32:33.787084    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetMachineName
	I0815 16:32:33.787215    3848 buildroot.go:166] provisioning hostname "ha-138000-m04"
	I0815 16:32:33.787226    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetMachineName
	I0815 16:32:33.787318    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:33.787397    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:33.787483    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:33.787564    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:33.787640    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:33.787765    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:32:33.787937    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0815 16:32:33.787945    3848 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-138000-m04 && echo "ha-138000-m04" | sudo tee /etc/hostname
	I0815 16:32:33.847992    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-138000-m04
	
	I0815 16:32:33.848008    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:33.848137    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:33.848240    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:33.848322    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:33.848426    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:33.848548    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:32:33.848705    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0815 16:32:33.848716    3848 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-138000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-138000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-138000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 16:32:33.904813    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
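
The hosts-file script above is idempotent: it rewrites an existing 127.0.1.1 entry in place and only appends when none exists, so repeated provisioning never duplicates the mapping. A quick sketch for checking the result inside the guest over SSH:

	# Confirm the hostname was set and resolves via /etc/hosts in the VM
	hostname
	grep 'ha-138000-m04' /etc/hosts
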
	I0815 16:32:33.904838    3848 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19452-977/.minikube CaCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19452-977/.minikube}
	I0815 16:32:33.904848    3848 buildroot.go:174] setting up certificates
	I0815 16:32:33.904853    3848 provision.go:84] configureAuth start
	I0815 16:32:33.904860    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetMachineName
	I0815 16:32:33.904995    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetIP
	I0815 16:32:33.905084    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:33.905176    3848 provision.go:143] copyHostCerts
	I0815 16:32:33.905203    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:32:33.905264    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem, removing ...
	I0815 16:32:33.905280    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:32:33.915862    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem (1082 bytes)
	I0815 16:32:33.936338    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:32:33.936399    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem, removing ...
	I0815 16:32:33.936405    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:32:33.960707    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem (1123 bytes)
	I0815 16:32:33.961241    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:32:33.961296    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem, removing ...
	I0815 16:32:33.961303    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:32:33.961391    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem (1679 bytes)
	I0815 16:32:33.961771    3848 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem org=jenkins.ha-138000-m04 san=[127.0.0.1 192.169.0.8 ha-138000-m04 localhost minikube]
	I0815 16:32:34.048242    3848 provision.go:177] copyRemoteCerts
	I0815 16:32:34.048297    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 16:32:34.048312    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:34.048461    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:34.048558    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:34.048644    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:34.048725    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/id_rsa Username:docker}
	I0815 16:32:34.079744    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 16:32:34.079820    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 16:32:34.099832    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 16:32:34.099904    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 16:32:34.119955    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 16:32:34.120035    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 16:32:34.140743    3848 provision.go:87] duration metric: took 235.600662ms to configureAuth
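
configureAuth generated a server certificate whose SANs cover the loopback address, the VM IP, the machine name, localhost, and minikube, then scp'd the CA and server pair into /etc/docker. A sketch for inspecting the SANs on the deployed certificate (the path is the one copied above):

	# Print the Subject Alternative Names baked into the Docker server cert
	openssl x509 -in /etc/docker/server.pem -noout -text | grep -A 1 'Subject Alternative Name'
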
	I0815 16:32:34.140757    3848 buildroot.go:189] setting minikube options for container-runtime
	I0815 16:32:34.140940    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:32:34.140975    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:32:34.141106    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:34.141218    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:34.141307    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:34.141393    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:34.141471    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:34.141580    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:32:34.141705    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0815 16:32:34.141713    3848 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0815 16:32:34.191590    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0815 16:32:34.191604    3848 buildroot.go:70] root file system type: tmpfs
	I0815 16:32:34.191676    3848 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0815 16:32:34.191686    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:34.191824    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:34.191939    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:34.192031    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:34.192133    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:34.192260    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:32:34.192405    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0815 16:32:34.192449    3848 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0815 16:32:34.253544    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	Environment=NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0815 16:32:34.253562    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:34.253696    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:34.253789    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:34.253863    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:34.253953    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:34.254084    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:32:34.254223    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0815 16:32:34.254235    3848 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0815 16:32:35.839568    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0815 16:32:35.839584    3848 machine.go:96] duration metric: took 37.11179722s to provisionDockerMachine
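
The unit file written above leans on two systemd conventions: an empty ExecStart= clears any command inherited from a base unit before the real command is set (as the unit's own comments note), and repeated Environment=NO_PROXY assignments are applied in order, so the last one (the full three-address list) is the effective value. A sketch for verifying both on the guest:

	# Show the unit systemd actually loaded, and the effective environment
	systemctl cat docker.service
	systemctl show docker.service --property=Environment
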
	I0815 16:32:35.839591    3848 start.go:293] postStartSetup for "ha-138000-m04" (driver="hyperkit")
	I0815 16:32:35.839597    3848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 16:32:35.839606    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:32:35.839797    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 16:32:35.839811    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:35.839906    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:35.839987    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:35.840069    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:35.840139    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/id_rsa Username:docker}
	I0815 16:32:35.872247    3848 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 16:32:35.875358    3848 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 16:32:35.875369    3848 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/addons for local assets ...
	I0815 16:32:35.875469    3848 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/files for local assets ...
	I0815 16:32:35.875649    3848 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> 14982.pem in /etc/ssl/certs
	I0815 16:32:35.875656    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /etc/ssl/certs/14982.pem
	I0815 16:32:35.875856    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 16:32:35.884005    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:32:35.903707    3848 start.go:296] duration metric: took 64.039683ms for postStartSetup
	I0815 16:32:35.903730    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:32:35.903903    3848 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0815 16:32:35.903917    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:35.904012    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:35.904095    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:35.904168    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:35.904243    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/id_rsa Username:docker}
	I0815 16:32:35.936201    3848 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0815 16:32:35.936261    3848 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0815 16:32:35.969821    3848 fix.go:56] duration metric: took 37.351909726s for fixHost
	I0815 16:32:35.969846    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:35.969981    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:35.970066    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:35.970160    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:35.970248    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:35.970357    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:32:35.970503    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0815 16:32:35.970511    3848 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 16:32:36.019594    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723764755.882644542
	
	I0815 16:32:36.019607    3848 fix.go:216] guest clock: 1723764755.882644542
	I0815 16:32:36.019612    3848 fix.go:229] Guest: 2024-08-15 16:32:35.882644542 -0700 PDT Remote: 2024-08-15 16:32:35.969836 -0700 PDT m=+161.949888378 (delta=-87.191458ms)
	I0815 16:32:36.019628    3848 fix.go:200] guest clock delta is within tolerance: -87.191458ms
	I0815 16:32:36.019633    3848 start.go:83] releasing machines lock for "ha-138000-m04", held for 37.401695552s
	I0815 16:32:36.019652    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:32:36.019780    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetIP
	I0815 16:32:36.042030    3848 out.go:177] * Found network options:
	I0815 16:32:36.062147    3848 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	W0815 16:32:36.083026    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 16:32:36.083070    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 16:32:36.083084    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 16:32:36.083102    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:32:36.083847    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:32:36.084058    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:32:36.084240    3848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 16:32:36.084283    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	W0815 16:32:36.084353    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 16:32:36.084375    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 16:32:36.084394    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 16:32:36.084487    3848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 16:32:36.084508    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:36.084519    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:36.084733    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:36.084745    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:36.084957    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:36.084992    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:36.085156    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:36.085189    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/id_rsa Username:docker}
	I0815 16:32:36.085315    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/id_rsa Username:docker}
	W0815 16:32:36.114740    3848 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 16:32:36.114803    3848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 16:32:36.163124    3848 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 16:32:36.163145    3848 start.go:495] detecting cgroup driver to use...
	I0815 16:32:36.163258    3848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:32:36.179534    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0815 16:32:36.187872    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0815 16:32:36.196474    3848 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0815 16:32:36.196528    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0815 16:32:36.204752    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:32:36.212948    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0815 16:32:36.221222    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:32:36.229511    3848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 16:32:36.238142    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0815 16:32:36.246643    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0815 16:32:36.254862    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0815 16:32:36.263281    3848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 16:32:36.270596    3848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 16:32:36.278325    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:32:36.377803    3848 ssh_runner.go:195] Run: sudo systemctl restart containerd
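
The sed edits above force containerd onto the cgroupfs driver (SystemdCgroup = false), normalize the runtime to the runc v2 shim, and point the CNI conf_dir at /etc/cni/net.d before the restart. A sketch for spot-checking that the rewrite took effect:

	# Verify the cgroup driver and runtime shim settings in the live config
	grep -n 'SystemdCgroup' /etc/containerd/config.toml
	grep -n 'io.containerd.runc.v2' /etc/containerd/config.toml
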
	I0815 16:32:36.396329    3848 start.go:495] detecting cgroup driver to use...
	I0815 16:32:36.396399    3848 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0815 16:32:36.411192    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:32:36.423875    3848 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 16:32:36.437859    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:32:36.449142    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:32:36.460191    3848 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0815 16:32:36.479331    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:32:36.491179    3848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:32:36.506341    3848 ssh_runner.go:195] Run: which cri-dockerd
	I0815 16:32:36.509156    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0815 16:32:36.517306    3848 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0815 16:32:36.530887    3848 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0815 16:32:36.631226    3848 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0815 16:32:36.742723    3848 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0815 16:32:36.742750    3848 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0815 16:32:36.756569    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:32:36.851332    3848 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0815 16:32:39.062024    3848 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.208594053s)
	I0815 16:32:39.062086    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0815 16:32:39.072858    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:32:39.083135    3848 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0815 16:32:39.180174    3848 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0815 16:32:39.296201    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:32:39.397264    3848 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0815 16:32:39.409768    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:32:39.419919    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:32:39.520172    3848 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0815 16:32:39.580712    3848 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0815 16:32:39.580787    3848 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0815 16:32:39.585172    3848 start.go:563] Will wait 60s for crictl version
	I0815 16:32:39.585233    3848 ssh_runner.go:195] Run: which crictl
	I0815 16:32:39.588436    3848 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 16:32:39.616400    3848 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
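
With cri-dockerd running, crictl reaches Docker through the CRI socket configured in /etc/crictl.yaml above. The same version query can be issued with the endpoint given explicitly, as a sketch:

	# Query the CRI runtime directly over the cri-dockerd socket
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
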
	I0815 16:32:39.616480    3848 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:32:39.635416    3848 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:32:39.674509    3848 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0815 16:32:39.715170    3848 out.go:177]   - env NO_PROXY=192.169.0.5
	I0815 16:32:39.736207    3848 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0815 16:32:39.756990    3848 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	I0815 16:32:39.778125    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetIP
	I0815 16:32:39.778383    3848 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0815 16:32:39.781735    3848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 16:32:39.792335    3848 mustload.go:65] Loading cluster: ha-138000
	I0815 16:32:39.792518    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:32:39.792754    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:32:39.792777    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:32:39.801573    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52333
	I0815 16:32:39.801892    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:32:39.802227    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:32:39.802235    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:32:39.802431    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:32:39.802539    3848 main.go:141] libmachine: (ha-138000) Calling .GetState
	I0815 16:32:39.802617    3848 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:32:39.802698    3848 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid from json: 3862
	I0815 16:32:39.803669    3848 host.go:66] Checking if "ha-138000" exists ...
	I0815 16:32:39.803925    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:32:39.803948    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:32:39.812411    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52335
	I0815 16:32:39.812752    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:32:39.813108    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:32:39.813119    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:32:39.813352    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:32:39.813479    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:32:39.813578    3848 certs.go:68] Setting up /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000 for IP: 192.169.0.8
	I0815 16:32:39.813584    3848 certs.go:194] generating shared ca certs ...
	I0815 16:32:39.813595    3848 certs.go:226] acquiring lock for ca certs: {Name:mka1c88019f6064d2983ac988db71a67aeb65696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:32:39.813775    3848 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key
	I0815 16:32:39.813853    3848 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key
	I0815 16:32:39.813863    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 16:32:39.813888    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 16:32:39.813907    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 16:32:39.813924    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 16:32:39.814032    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem (1338 bytes)
	W0815 16:32:39.814088    3848 certs.go:480] ignoring /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498_empty.pem, impossibly tiny 0 bytes
	I0815 16:32:39.814098    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 16:32:39.814142    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem (1082 bytes)
	I0815 16:32:39.814184    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem (1123 bytes)
	I0815 16:32:39.814213    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem (1679 bytes)
	I0815 16:32:39.814289    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:32:39.814324    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:32:39.814344    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem -> /usr/share/ca-certificates/1498.pem
	I0815 16:32:39.814362    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /usr/share/ca-certificates/14982.pem
	I0815 16:32:39.814393    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 16:32:39.834330    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 16:32:39.854069    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 16:32:39.873582    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 16:32:39.893143    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 16:32:39.912645    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem --> /usr/share/ca-certificates/1498.pem (1338 bytes)
	I0815 16:32:39.932104    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /usr/share/ca-certificates/14982.pem (1708 bytes)
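
The vm_assets.go and ssh_runner.go lines above pair each host-side certificate with a fixed destination path inside the VM, then copy them over one by one. A minimal Go sketch of that source-to-target mapping; the FileAsset struct here is hypothetical (minikube's real asset type also tracks permissions and length):

package main

import "fmt"

// FileAsset pairs a file on the host with its destination inside the VM.
// Hypothetical type for illustration only.
type FileAsset struct {
	Source string // path on the macOS host
	Target string // path inside the guest VM
}

func main() {
	base := "/Users/jenkins/minikube-integration/19452-977/.minikube"
	assets := []FileAsset{
		{base + "/ca.crt", "/var/lib/minikube/certs/ca.crt"},
		{base + "/ca.key", "/var/lib/minikube/certs/ca.key"},
		{base + "/proxy-client-ca.crt", "/var/lib/minikube/certs/proxy-client-ca.crt"},
		{base + "/proxy-client-ca.key", "/var/lib/minikube/certs/proxy-client-ca.key"},
	}
	for _, a := range assets {
		// Each "scp ... --> ..." line in the log corresponds to one asset copy.
		fmt.Printf("scp %s --> %s\n", a.Source, a.Target)
	}
}
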
	I0815 16:32:39.951872    3848 ssh_runner.go:195] Run: openssl version
	I0815 16:32:39.956296    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 16:32:39.966055    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:32:39.970287    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:05 /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:32:39.970366    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:32:39.974984    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 16:32:39.984513    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1498.pem && ln -fs /usr/share/ca-certificates/1498.pem /etc/ssl/certs/1498.pem"
	I0815 16:32:39.994098    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1498.pem
	I0815 16:32:39.997571    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:14 /usr/share/ca-certificates/1498.pem
	I0815 16:32:39.997641    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1498.pem
	I0815 16:32:40.002092    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1498.pem /etc/ssl/certs/51391683.0"
	I0815 16:32:40.011802    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14982.pem && ln -fs /usr/share/ca-certificates/14982.pem /etc/ssl/certs/14982.pem"
	I0815 16:32:40.021159    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14982.pem
	I0815 16:32:40.024904    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:14 /usr/share/ca-certificates/14982.pem
	I0815 16:32:40.024948    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14982.pem
	I0815 16:32:40.029236    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14982.pem /etc/ssl/certs/3ec20f2e.0"
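
The three openssl/ln pairs above install each PEM into the guest's trust store: "openssl x509 -hash -noout" prints the certificate's subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0) and "ln -fs" links the file under that name in /etc/ssl/certs. A sketch of the same two steps in Go, assuming a local openssl on PATH and write access to /etc/ssl/certs; minikube actually runs these commands inside the VM over SSH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA symlinks a PEM certificate into /etc/ssl/certs under its
// OpenSSL subject-hash name, mirroring the "openssl x509 -hash" plus
// "ln -fs" pair in the log above. Sketch only.
func installCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pem, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(pem, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
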
	I0815 16:32:40.038952    3848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 16:32:40.042186    3848 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 16:32:40.042220    3848 kubeadm.go:934] updating node {m04 192.169.0.8 0 v1.31.0 docker false true} ...
	I0815 16:32:40.042279    3848 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-138000-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
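
The kubeadm.go:946 block above is the kubelet systemd drop-in minikube generates for the new worker, with ExecStart rebuilt from the node's binary path, hostname override, and IP (later written as 10-kubeadm.conf). A sketch of that substitution with text/template; nodeParams is a made-up struct, not minikube's real config type:

package main

import (
	"os"
	"text/template"
)

// nodeParams holds the per-node values substituted into the kubelet
// drop-in; hypothetical struct for illustration.
type nodeParams struct {
	KubeletPath      string
	HostnameOverride string
	NodeIP           string
}

const unitTmpl = `[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.HostnameOverride}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	p := nodeParams{
		KubeletPath:      "/var/lib/minikube/binaries/v1.31.0/kubelet",
		HostnameOverride: "ha-138000-m04",
		NodeIP:           "192.169.0.8",
	}
	template.Must(template.New("unit").Parse(unitTmpl)).Execute(os.Stdout, p)
}
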
	I0815 16:32:40.042327    3848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 16:32:40.050823    3848 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 16:32:40.050877    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0815 16:32:40.059254    3848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0815 16:32:40.072800    3848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 16:32:40.086506    3848 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0815 16:32:40.089484    3848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
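
The bash one-liner above updates /etc/hosts idempotently: grep -v strips any stale line ending in a tab plus control-plane.minikube.internal, echo appends the current VIP, and the result is copied back over /etc/hosts. The same filter-and-append in Go, as a sketch (upsertHost is a hypothetical helper):

package main

import (
	"fmt"
	"strings"
)

// upsertHost drops any line ending in "\t<name>" and appends "ip\tname",
// matching the grep -v / echo pipeline in the log above.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // old entry for the control-plane VIP
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	in := "127.0.0.1\tlocalhost\n192.169.0.200\tcontrol-plane.minikube.internal\n"
	fmt.Print(upsertHost(in, "192.169.0.254", "control-plane.minikube.internal"))
}
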
	I0815 16:32:40.099835    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:32:40.204428    3848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:32:40.219160    3848 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0815 16:32:40.219362    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:32:40.240563    3848 out.go:177] * Verifying Kubernetes components...
	I0815 16:32:40.281239    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:32:40.407726    3848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:32:40.424517    3848 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:32:40.424746    3848 kapi.go:59] client config for ha-138000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.key", CAFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x3a6cf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0815 16:32:40.424790    3848 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0815 16:32:40.424946    3848 node_ready.go:35] waiting up to 6m0s for node "ha-138000-m04" to be "Ready" ...
	I0815 16:32:40.424985    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m04
	I0815 16:32:40.424990    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.424997    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.425001    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.429695    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:32:40.925699    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m04
	I0815 16:32:40.925718    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.925730    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.925735    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.928643    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:40.929158    3848 node_ready.go:49] node "ha-138000-m04" has status "Ready":"True"
	I0815 16:32:40.929170    3848 node_ready.go:38] duration metric: took 503.811986ms for node "ha-138000-m04" to be "Ready" ...
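
The node_ready.go lines above poll GET /api/v1/nodes/ha-138000-m04 roughly twice a second until the node's Ready condition reports True. A minimal client-go sketch of the same wait; the kubeconfig path and poll interval are assumptions taken from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the API server until the node reports Ready=True,
// like minikube's node_ready.go loop in the log above.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the log polls about twice a second
	}
	return fmt.Errorf("node %s not Ready within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19452-977/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitNodeReady(cs, "ha-138000-m04", 6*time.Minute))
}
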
	I0815 16:32:40.929177    3848 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 16:32:40.929232    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:32:40.929240    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.929248    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.929253    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.932889    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:40.938534    3848 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dmgt5" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:40.938586    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-dmgt5
	I0815 16:32:40.938591    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.938597    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.938601    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.940630    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:40.941135    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:40.941143    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.941149    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.941155    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.943092    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:32:40.943437    3848 pod_ready.go:93] pod "coredns-6f6b679f8f-dmgt5" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:40.943446    3848 pod_ready.go:82] duration metric: took 4.897461ms for pod "coredns-6f6b679f8f-dmgt5" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:40.943453    3848 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-zc8jj" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:40.943484    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-zc8jj
	I0815 16:32:40.943489    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.943495    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.943498    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.945206    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:32:40.945690    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:40.945697    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.945703    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.945706    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.947257    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:32:40.947557    3848 pod_ready.go:93] pod "coredns-6f6b679f8f-zc8jj" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:40.947566    3848 pod_ready.go:82] duration metric: took 4.10464ms for pod "coredns-6f6b679f8f-zc8jj" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:40.947580    3848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:40.947611    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000
	I0815 16:32:40.947616    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.947622    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.947625    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.949227    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:32:40.949563    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:40.949570    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.949576    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.949579    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.951175    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:32:40.951528    3848 pod_ready.go:93] pod "etcd-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:40.951537    3848 pod_ready.go:82] duration metric: took 3.9487ms for pod "etcd-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:40.951543    3848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:40.951576    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m02
	I0815 16:32:40.951581    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.951587    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.951590    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.953480    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:32:40.953888    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:32:40.953896    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.953902    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.953906    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.956234    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:40.956704    3848 pod_ready.go:93] pod "etcd-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:40.956713    3848 pod_ready.go:82] duration metric: took 5.161406ms for pod "etcd-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:40.956719    3848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:41.126239    3848 request.go:632] Waited for 169.295221ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:32:41.126310    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:32:41.126326    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:41.126342    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:41.126348    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:41.129984    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:41.327227    3848 request.go:632] Waited for 196.482674ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:41.327282    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:41.327327    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:41.327340    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:41.327346    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:41.330300    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:41.330659    3848 pod_ready.go:93] pod "etcd-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:41.330669    3848 pod_ready.go:82] duration metric: took 373.660924ms for pod "etcd-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
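
The request.go:632 "Waited for ... due to client-side throttling" messages come from client-go's own rate limiter: with QPS and Burst left at 0 in the rest.Config dumped earlier, the client applies its defaults (5 QPS, burst 10), so these paired pod/node GETs queue for roughly 200ms each. A sketch of raising the limits, which trades extra API-server load for fewer throttling pauses:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19452-977/kubeconfig")
	if err != nil {
		panic(err)
	}
	// QPS:0, Burst:0 in the logged rest.Config means client-go's defaults
	// (5 QPS, burst 10) apply, producing the ~200ms waits in the log.
	cfg.QPS = 50
	cfg.Burst = 100
	_ = kubernetes.NewForConfigOrDie(cfg)
}
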
	I0815 16:32:41.330681    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:41.526448    3848 request.go:632] Waited for 195.583591ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000
	I0815 16:32:41.526543    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000
	I0815 16:32:41.526554    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:41.526567    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:41.526577    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:41.532016    3848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 16:32:41.726373    3848 request.go:632] Waited for 193.637616ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:41.726406    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:41.726411    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:41.726417    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:41.726421    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:41.728634    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:41.729100    3848 pod_ready.go:93] pod "kube-apiserver-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:41.729111    3848 pod_ready.go:82] duration metric: took 398.123683ms for pod "kube-apiserver-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:41.729118    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:41.926911    3848 request.go:632] Waited for 197.603818ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:32:41.927000    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:32:41.927007    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:41.927013    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:41.927017    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:41.929844    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:42.128208    3848 request.go:632] Waited for 197.600405ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:32:42.128281    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:32:42.128287    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:42.128294    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:42.128297    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:42.130511    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:42.130893    3848 pod_ready.go:93] pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:42.130903    3848 pod_ready.go:82] duration metric: took 401.488989ms for pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:42.130910    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:42.326992    3848 request.go:632] Waited for 195.89771ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m03
	I0815 16:32:42.327104    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m03
	I0815 16:32:42.327117    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:42.327128    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:42.327133    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:42.330012    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:42.528721    3848 request.go:632] Waited for 197.972621ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:42.528810    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:42.528823    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:42.528832    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:42.528839    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:42.531660    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:42.532014    3848 pod_ready.go:93] pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:42.532023    3848 pod_ready.go:82] duration metric: took 400.824225ms for pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:42.532031    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:42.728571    3848 request.go:632] Waited for 196.361424ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:32:42.728605    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:32:42.728614    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:42.728647    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:42.728651    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:42.731003    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:42.928382    3848 request.go:632] Waited for 196.815945ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:42.928456    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:42.928464    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:42.928472    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:42.928479    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:42.930971    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:42.931316    3848 pod_ready.go:93] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:42.931325    3848 pod_ready.go:82] duration metric: took 399.007322ms for pod "kube-controller-manager-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:42.931332    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:43.127763    3848 request.go:632] Waited for 196.250954ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000-m02
	I0815 16:32:43.127817    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000-m02
	I0815 16:32:43.127830    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:43.127894    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:43.127907    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:43.131065    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:43.327999    3848 request.go:632] Waited for 196.235394ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:32:43.328052    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:32:43.328063    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:43.328073    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:43.328081    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:43.331302    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:43.331997    3848 pod_ready.go:93] pod "kube-controller-manager-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:43.332007    3848 pod_ready.go:82] duration metric: took 400.403262ms for pod "kube-controller-manager-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:43.332014    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:43.527716    3848 request.go:632] Waited for 195.527377ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000-m03
	I0815 16:32:43.527817    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000-m03
	I0815 16:32:43.527829    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:43.527841    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:43.527847    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:43.530965    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:43.728236    3848 request.go:632] Waited for 196.484633ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:43.728298    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:43.728309    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:43.728320    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:43.728328    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:43.731883    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:43.732469    3848 pod_ready.go:93] pod "kube-controller-manager-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:43.732478    3848 pod_ready.go:82] duration metric: took 400.192656ms for pod "kube-controller-manager-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:43.732484    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cznkn" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:43.928265    3848 request.go:632] Waited for 195.61986ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cznkn
	I0815 16:32:43.928325    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cznkn
	I0815 16:32:43.928331    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:43.928337    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:43.928341    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:43.930546    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:44.128606    3848 request.go:632] Waited for 197.39717ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:44.128669    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:44.128682    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:44.128693    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:44.128702    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:44.132274    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:44.132835    3848 pod_ready.go:93] pod "kube-proxy-cznkn" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:44.132847    3848 pod_ready.go:82] duration metric: took 400.10235ms for pod "kube-proxy-cznkn" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:44.132856    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kxghx" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:44.328927    3848 request.go:632] Waited for 195.898781ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kxghx
	I0815 16:32:44.328980    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kxghx
	I0815 16:32:44.328988    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:44.328997    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:44.329003    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:44.332425    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:44.528721    3848 request.go:632] Waited for 195.542417ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:44.528856    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:44.528867    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:44.528878    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:44.528884    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:44.532391    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:44.532921    3848 pod_ready.go:93] pod "kube-proxy-kxghx" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:44.532933    3848 pod_ready.go:82] duration metric: took 399.821933ms for pod "kube-proxy-kxghx" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:44.532943    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qpth7" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:44.729675    3848 request.go:632] Waited for 196.549445ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qpth7
	I0815 16:32:44.729804    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qpth7
	I0815 16:32:44.729823    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:44.729835    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:44.729845    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:44.733406    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:44.929790    3848 request.go:632] Waited for 195.811353ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m04
	I0815 16:32:44.929844    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m04
	I0815 16:32:44.929899    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:44.929913    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:44.929919    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:44.933124    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:44.933608    3848 pod_ready.go:93] pod "kube-proxy-qpth7" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:44.933620    3848 pod_ready.go:82] duration metric: took 400.423483ms for pod "kube-proxy-qpth7" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:44.933628    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tf79g" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:45.129188    3848 request.go:632] Waited for 195.397689ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tf79g
	I0815 16:32:45.129249    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tf79g
	I0815 16:32:45.129265    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:45.129278    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:45.129288    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:45.132523    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:45.329740    3848 request.go:632] Waited for 196.543831ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:32:45.329842    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:32:45.329853    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:45.329864    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:45.329893    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:45.332959    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:45.333655    3848 pod_ready.go:93] pod "kube-proxy-tf79g" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:45.333668    3848 pod_ready.go:82] duration metric: took 399.799233ms for pod "kube-proxy-tf79g" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:45.333677    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:45.528959    3848 request.go:632] Waited for 195.085989ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000
	I0815 16:32:45.528999    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000
	I0815 16:32:45.529004    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:45.529011    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:45.529014    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:45.531204    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:45.730380    3848 request.go:632] Waited for 198.71096ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:45.730470    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:45.730488    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:45.730540    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:45.730549    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:45.733632    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:45.734206    3848 pod_ready.go:93] pod "kube-scheduler-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:45.734218    3848 pod_ready.go:82] duration metric: took 400.300105ms for pod "kube-scheduler-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:45.734227    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:45.929618    3848 request.go:632] Waited for 195.186999ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m02
	I0815 16:32:45.929667    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m02
	I0815 16:32:45.929676    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:45.929687    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:45.929695    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:45.933262    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:46.130161    3848 request.go:632] Waited for 196.149607ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:32:46.130227    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:32:46.130233    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:46.130239    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:46.130243    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:46.132556    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:46.132872    3848 pod_ready.go:93] pod "kube-scheduler-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:46.132882    3848 pod_ready.go:82] duration metric: took 398.424946ms for pod "kube-scheduler-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:46.132892    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:46.330062    3848 request.go:632] Waited for 196.982598ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m03
	I0815 16:32:46.330155    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m03
	I0815 16:32:46.330165    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:46.330189    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:46.330198    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:46.333748    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:46.529626    3848 request.go:632] Waited for 195.297916ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:46.529687    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:46.529698    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:46.529709    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:46.529716    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:46.532896    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:46.533425    3848 pod_ready.go:93] pod "kube-scheduler-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:46.533437    3848 pod_ready.go:82] duration metric: took 400.316472ms for pod "kube-scheduler-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:46.533445    3848 pod_ready.go:39] duration metric: took 5.600601602s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 16:32:46.533458    3848 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 16:32:46.533512    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 16:32:46.545338    3848 system_svc.go:56] duration metric: took 11.868784ms WaitForService to wait for kubelet
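
system_svc.go verifies kubelet by running "systemctl is-active --quiet" over SSH and treating exit code 0 as "running". A local sketch of the same probe:

package main

import (
	"fmt"
	"os/exec"
)

// serviceActive mirrors the log's "systemctl is-active --quiet" check:
// the command exits 0 only when the unit is active.
func serviceActive(unit string) bool {
	return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
}

func main() {
	fmt.Println("kubelet active:", serviceActive("kubelet"))
}
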
	I0815 16:32:46.545353    3848 kubeadm.go:582] duration metric: took 6.321930293s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 16:32:46.545367    3848 node_conditions.go:102] verifying NodePressure condition ...
	I0815 16:32:46.729678    3848 request.go:632] Waited for 184.161888ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0815 16:32:46.729775    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0815 16:32:46.729791    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:46.729803    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:46.729814    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:46.733356    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:46.734408    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:32:46.734417    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:32:46.734438    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:32:46.734446    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:32:46.734451    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:32:46.734454    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:32:46.734459    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:32:46.734463    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:32:46.734466    3848 node_conditions.go:105] duration metric: took 188.991963ms to run NodePressure ...
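
node_conditions.go reads each node's ephemeral-storage and cpu capacity from the nodes list to verify NodePressure; the 17734596Ki and cpu 2 figures above come from Status.Capacity. A client-go sketch printing the same two quantities (kubeconfig path assumed as before):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19452-977/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// The "17734596Ki" and "cpu capacity is 2" log lines read these fields.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}
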
	I0815 16:32:46.734473    3848 start.go:241] waiting for startup goroutines ...
	I0815 16:32:46.734487    3848 start.go:255] writing updated cluster config ...
	I0815 16:32:46.734849    3848 ssh_runner.go:195] Run: rm -f paused
	I0815 16:32:46.777324    3848 start.go:600] kubectl: 1.29.2, cluster: 1.31.0 (minor skew: 2)
	I0815 16:32:46.799308    3848 out.go:201] 
	W0815 16:32:46.820067    3848 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0.
	I0815 16:32:46.840863    3848 out.go:177]   - Want kubectl v1.31.0? Try 'minikube kubectl -- get pods -A'
	I0815 16:32:46.862128    3848 out.go:177] * Done! kubectl is now configured to use "ha-138000" cluster and "default" namespace by default
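
start.go:600 compares the local kubectl's minor version (1.29) against the cluster's (1.31) and warns because the skew of 2 exceeds the one-minor-version window kubectl supports. A sketch of that comparison with deliberately simplified parsing (real code would use a semver library):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns |minor(a) - minor(b)| for "major.minor.patch" strings.
func minorSkew(a, b string) int {
	minor := func(v string) int {
		parts := strings.Split(v, ".")
		m, _ := strconv.Atoi(parts[1])
		return m
	}
	d := minor(a) - minor(b)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	skew := minorSkew("1.29.2", "1.31.0")
	fmt.Println("minor skew:", skew) // 2, which triggers the warning above
	if skew > 1 {
		fmt.Println("! kubectl may have incompatibilities with the cluster")
	}
}
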
	
	
	==> Docker <==
	Aug 15 23:30:58 ha-138000 dockerd[1153]: time="2024-08-15T23:30:58.911495531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:30:58 ha-138000 dockerd[1153]: time="2024-08-15T23:30:58.913627850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 15 23:30:58 ha-138000 dockerd[1153]: time="2024-08-15T23:30:58.913666039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 15 23:30:58 ha-138000 dockerd[1153]: time="2024-08-15T23:30:58.913677629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:30:58 ha-138000 dockerd[1153]: time="2024-08-15T23:30:58.913771765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:30:58 ha-138000 dockerd[1153]: time="2024-08-15T23:30:58.917066694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 15 23:30:58 ha-138000 dockerd[1153]: time="2024-08-15T23:30:58.917195390Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 15 23:30:58 ha-138000 dockerd[1153]: time="2024-08-15T23:30:58.917208298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:30:58 ha-138000 dockerd[1153]: time="2024-08-15T23:30:58.917385910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:30:59 ha-138000 dockerd[1153]: time="2024-08-15T23:30:59.886428053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 15 23:30:59 ha-138000 dockerd[1153]: time="2024-08-15T23:30:59.886532806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 15 23:30:59 ha-138000 dockerd[1153]: time="2024-08-15T23:30:59.886546833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:30:59 ha-138000 dockerd[1153]: time="2024-08-15T23:30:59.886748891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:31:00 ha-138000 dockerd[1153]: time="2024-08-15T23:31:00.892633352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 15 23:31:00 ha-138000 dockerd[1153]: time="2024-08-15T23:31:00.893116347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 15 23:31:00 ha-138000 dockerd[1153]: time="2024-08-15T23:31:00.893221469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:31:00 ha-138000 dockerd[1153]: time="2024-08-15T23:31:00.893411350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:31:01 ha-138000 dockerd[1153]: time="2024-08-15T23:31:01.876748430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 15 23:31:01 ha-138000 dockerd[1153]: time="2024-08-15T23:31:01.876814366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 15 23:31:01 ha-138000 dockerd[1153]: time="2024-08-15T23:31:01.876834716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:31:01 ha-138000 dockerd[1153]: time="2024-08-15T23:31:01.876961405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:31:03 ha-138000 dockerd[1153]: time="2024-08-15T23:31:03.874516614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 15 23:31:03 ha-138000 dockerd[1153]: time="2024-08-15T23:31:03.874614005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 15 23:31:03 ha-138000 dockerd[1153]: time="2024-08-15T23:31:03.874643416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:31:03 ha-138000 dockerd[1153]: time="2024-08-15T23:31:03.874757663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f4a0ec142726f       045733566833c                                                                                         About a minute ago   Running             kube-controller-manager   7                   787273cdcffa4       kube-controller-manager-ha-138000
	9b4d9e684266a       cbb01a7bd410d                                                                                         About a minute ago   Running             coredns                   1                   e616bc4c74358       coredns-6f6b679f8f-dmgt5
	80f5762ff7596       8c811b4aec35f                                                                                         About a minute ago   Running             busybox                   1                   67d12a31b7b49       busybox-7dff88458-wgww9
	fea7f52d9a276       6e38f40d628db                                                                                         About a minute ago   Running             storage-provisioner       1                   b65d03e28df57       storage-provisioner
	a06770ea62d50       cbb01a7bd410d                                                                                         About a minute ago   Running             coredns                   1                   730316cfbee9c       coredns-6f6b679f8f-zc8jj
	3102e608c7d69       ad83b2ca7b09e                                                                                         About a minute ago   Running             kube-proxy                1                   824e79b38bfeb       kube-proxy-cznkn
	d35ee43272703       12968670680f4                                                                                         About a minute ago   Running             kindnet-cni               1                   28b2ff94764c2       kindnet-77dc6
	67b207257b40d       2e96e5913fc06                                                                                         2 minutes ago        Running             etcd                      3                   5fbdeb5e7a6b9       etcd-ha-138000
	c2ddb52a9846f       1766f54c897f0                                                                                         2 minutes ago        Running             kube-scheduler            2                   d5e3465359549       kube-scheduler-ha-138000
	2d2c6da6f7b74       38af8ddebf499                                                                                         2 minutes ago        Running             kube-vip                  1                   2bb58ad8c8f10       kube-vip-ha-138000
	2ed9ae0427266       045733566833c                                                                                         2 minutes ago        Exited              kube-controller-manager   6                   787273cdcffa4       kube-controller-manager-ha-138000
	a6baf6e21d6c9       604f5db92eaa8                                                                                         2 minutes ago        Running             kube-apiserver            6                   0de6d71d60938       kube-apiserver-ha-138000
	5ed11c46e0eb7       604f5db92eaa8                                                                                         3 minutes ago        Exited              kube-apiserver            5                   7152268f8eec4       kube-apiserver-ha-138000
	59dac0b44544a       2e96e5913fc06                                                                                         3 minutes ago        Exited              etcd                      2                   ec285d4826baa       etcd-ha-138000
	efbc09be8eda5       38af8ddebf499                                                                                         7 minutes ago        Exited              kube-vip                  0                   0c665afd15e6f       kube-vip-ha-138000
	ac6935271595c       1766f54c897f0                                                                                         7 minutes ago        Exited              kube-scheduler            1                   07c1c62e41d3a       kube-scheduler-ha-138000
	8f20284cd3969       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   10 minutes ago       Exited              busybox                   0                   bfc975a528b9e       busybox-7dff88458-wgww9
	42f5d82b00417       cbb01a7bd410d                                                                                         13 minutes ago       Exited              coredns                   0                   10891f8fbffcc       coredns-6f6b679f8f-dmgt5
	3e8b806ef4f33       cbb01a7bd410d                                                                                         13 minutes ago       Exited              coredns                   0                   096ab15603b01       coredns-6f6b679f8f-zc8jj
	6a1122913bb18       6e38f40d628db                                                                                         13 minutes ago       Exited              storage-provisioner       0                   e30dde4a5a10d       storage-provisioner
	c2a16126718b3       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              13 minutes ago       Exited              kindnet-cni               0                   e260a94a203af       kindnet-77dc6
	fc2e141007efb       ad83b2ca7b09e                                                                                         13 minutes ago       Exited              kube-proxy                0                   5b40cdd6b2c24       kube-proxy-cznkn
	
	
	==> coredns [3e8b806ef4f3] <==
	[INFO] 10.244.2.2:44773 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000075522s
	[INFO] 10.244.2.2:53805 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098349s
	[INFO] 10.244.2.2:34369 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000122495s
	[INFO] 10.244.0.4:59671 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000077646s
	[INFO] 10.244.0.4:41185 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000079139s
	[INFO] 10.244.0.4:42405 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000092065s
	[INFO] 10.244.0.4:54373 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000049998s
	[INFO] 10.244.0.4:57169 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000050383s
	[INFO] 10.244.0.4:37825 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085108s
	[INFO] 10.244.1.2:59685 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000072268s
	[INFO] 10.244.1.2:32923 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073054s
	[INFO] 10.244.2.2:50876 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000068102s
	[INFO] 10.244.2.2:54719 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000762s
	[INFO] 10.244.0.4:57395 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000091608s
	[INFO] 10.244.0.4:37936 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000031052s
	[INFO] 10.244.1.2:58408 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000088888s
	[INFO] 10.244.1.2:42731 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000114857s
	[INFO] 10.244.1.2:41638 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000082664s
	[INFO] 10.244.2.2:52666 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092331s
	[INFO] 10.244.2.2:41501 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000093116s
	[INFO] 10.244.0.4:48200 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000075447s
	[INFO] 10.244.0.4:35056 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000091854s
	[INFO] 10.244.0.4:36257 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000057922s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [42f5d82b0041] <==
	[INFO] 10.244.1.2:50104 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.009876264s
	[INFO] 10.244.0.4:33653 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115506s
	[INFO] 10.244.0.4:45180 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000042438s
	[INFO] 10.244.1.2:60312 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000068925s
	[INFO] 10.244.1.2:38521 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000124425s
	[INFO] 10.244.1.2:51675 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125646s
	[INFO] 10.244.1.2:33974 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000078827s
	[INFO] 10.244.2.2:38966 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000078816s
	[INFO] 10.244.2.2:56056 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000620092s
	[INFO] 10.244.2.2:32787 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000109221s
	[INFO] 10.244.2.2:55701 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000039601s
	[INFO] 10.244.0.4:52543 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000083971s
	[INFO] 10.244.0.4:55050 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000146353s
	[INFO] 10.244.1.2:52165 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100415s
	[INFO] 10.244.1.2:41123 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060755s
	[INFO] 10.244.2.2:56460 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087503s
	[INFO] 10.244.2.2:36407 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009778s
	[INFO] 10.244.0.4:40764 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000037536s
	[INFO] 10.244.0.4:58473 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000029335s
	[INFO] 10.244.1.2:38640 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000118481s
	[INFO] 10.244.2.2:46151 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000117088s
	[INFO] 10.244.2.2:34054 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000108858s
	[INFO] 10.244.0.4:56735 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000069666s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9b4d9e684266] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35767 - 22561 "HINFO IN 7004530829965964013.1750022571380345519. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015451267s
	
	
	==> coredns [a06770ea62d5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45363 - 12851 "HINFO IN 3106403090745602942.3481725171230015744. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.010450605s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[254954895]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 23:30:59.263) (total time: 30001ms):
	Trace[254954895]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (23:31:29.264)
	Trace[254954895]: [30.001669104s] [30.001669104s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1581349608]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 23:30:59.262) (total time: 30003ms):
	Trace[1581349608]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (23:31:29.264)
	Trace[1581349608]: [30.003336626s] [30.003336626s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[405473182]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 23:30:59.265) (total time: 30001ms):
	Trace[405473182]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (23:31:29.266)
	Trace[405473182]: [30.001211712s] [30.001211712s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               ha-138000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-138000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=ha-138000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T16_19_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:19:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-138000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 23:32:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:30:42 +0000   Thu, 15 Aug 2024 23:19:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:30:42 +0000   Thu, 15 Aug 2024 23:19:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:30:42 +0000   Thu, 15 Aug 2024 23:19:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:30:42 +0000   Thu, 15 Aug 2024 23:30:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-138000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 92a77083c2c148ceb3a6c27974611a44
	  System UUID:                bf1b4c04-0000-0000-a028-0dd0a6dcd337
	  Boot ID:                    0c496489-3552-4f3e-814f-62743ebab1dd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wgww9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-6f6b679f8f-dmgt5             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-6f6b679f8f-zc8jj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-138000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-77dc6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-138000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-138000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-cznkn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-138000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-138000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 114s                   kube-proxy       
	  Normal  Starting                 13m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                    kubelet          Node ha-138000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                    kubelet          Node ha-138000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                    kubelet          Node ha-138000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                    node-controller  Node ha-138000 event: Registered Node ha-138000 in Controller
	  Normal  NodeReady                13m                    kubelet          Node ha-138000 status is now: NodeReady
	  Normal  RegisteredNode           12m                    node-controller  Node ha-138000 event: Registered Node ha-138000 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-138000 event: Registered Node ha-138000 in Controller
	  Normal  RegisteredNode           9m2s                   node-controller  Node ha-138000 event: Registered Node ha-138000 in Controller
	  Normal  NodeHasSufficientMemory  2m41s (x8 over 2m41s)  kubelet          Node ha-138000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m41s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m41s (x8 over 2m41s)  kubelet          Node ha-138000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m41s (x7 over 2m41s)  kubelet          Node ha-138000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m9s                   node-controller  Node ha-138000 event: Registered Node ha-138000 in Controller
	  Normal  RegisteredNode           107s                   node-controller  Node ha-138000 event: Registered Node ha-138000 in Controller
	  Normal  RegisteredNode           69s                    node-controller  Node ha-138000 event: Registered Node ha-138000 in Controller
	
	
	Name:               ha-138000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-138000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=ha-138000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T16_20_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:20:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-138000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 23:32:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:30:40 +0000   Thu, 15 Aug 2024 23:20:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:30:40 +0000   Thu, 15 Aug 2024 23:20:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:30:40 +0000   Thu, 15 Aug 2024 23:20:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:30:40 +0000   Thu, 15 Aug 2024 23:20:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-138000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 f9fb9b8d5e3646d78c1f55449a26b188
	  System UUID:                4cff4215-0000-0000-9139-05f05b79bce3
	  Boot ID:                    26a8e1bf-75d0-4caa-b86c-d0e6f8c9e474
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-s6zqd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-138000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-z6mnx                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-138000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-138000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-tf79g                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-138000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-138000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m12s                  kube-proxy       
	  Normal   Starting                 9m6s                   kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-138000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-138000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-138000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-138000-m02 event: Registered Node ha-138000-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-138000-m02 event: Registered Node ha-138000-m02 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-138000-m02 event: Registered Node ha-138000-m02 in Controller
	  Normal   Starting                 9m11s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9m11s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 9m10s                  kubelet          Node ha-138000-m02 has been rebooted, boot id: 8d4ef345-e3b6-437d-95f7-338233576a37
	  Normal   NodeHasSufficientMemory  9m10s                  kubelet          Node ha-138000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m10s                  kubelet          Node ha-138000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m10s                  kubelet          Node ha-138000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m2s                   node-controller  Node ha-138000-m02 event: Registered Node ha-138000-m02 in Controller
	  Normal   Starting                 2m22s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m21s (x8 over 2m22s)  kubelet          Node ha-138000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m21s (x8 over 2m22s)  kubelet          Node ha-138000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m21s (x7 over 2m22s)  kubelet          Node ha-138000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m9s                   node-controller  Node ha-138000-m02 event: Registered Node ha-138000-m02 in Controller
	  Normal   RegisteredNode           107s                   node-controller  Node ha-138000-m02 event: Registered Node ha-138000-m02 in Controller
	  Normal   RegisteredNode           69s                    node-controller  Node ha-138000-m02 event: Registered Node ha-138000-m02 in Controller
	
	
	Name:               ha-138000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-138000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=ha-138000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T16_21_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:21:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-138000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 23:32:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:31:37 +0000   Thu, 15 Aug 2024 23:31:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:31:37 +0000   Thu, 15 Aug 2024 23:31:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:31:37 +0000   Thu, 15 Aug 2024 23:31:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:31:37 +0000   Thu, 15 Aug 2024 23:31:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-138000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 a589cb93968b432caa5fc365bb995740
	  System UUID:                42284b8b-0000-0000-ac7c-129bf380703a
	  Boot ID:                    3cf0bc98-5f0e-4a33-80fb-e0c2d84cf3db
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-t5sdh                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-138000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-dsvxt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-138000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-138000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-kxghx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-138000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-138000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 73s                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-138000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-138000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-138000-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node ha-138000-m03 event: Registered Node ha-138000-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-138000-m03 event: Registered Node ha-138000-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-138000-m03 event: Registered Node ha-138000-m03 in Controller
	  Normal   RegisteredNode           9m2s               node-controller  Node ha-138000-m03 event: Registered Node ha-138000-m03 in Controller
	  Normal   RegisteredNode           2m9s               node-controller  Node ha-138000-m03 event: Registered Node ha-138000-m03 in Controller
	  Normal   RegisteredNode           107s               node-controller  Node ha-138000-m03 event: Registered Node ha-138000-m03 in Controller
	  Normal   NodeNotReady             89s                node-controller  Node ha-138000-m03 status is now: NodeNotReady
	  Normal   Starting                 76s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  76s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  76s (x3 over 76s)  kubelet          Node ha-138000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    76s (x3 over 76s)  kubelet          Node ha-138000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     76s (x3 over 76s)  kubelet          Node ha-138000-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 76s (x2 over 76s)  kubelet          Node ha-138000-m03 has been rebooted, boot id: 3cf0bc98-5f0e-4a33-80fb-e0c2d84cf3db
	  Normal   NodeReady                76s (x2 over 76s)  kubelet          Node ha-138000-m03 status is now: NodeReady
	  Normal   RegisteredNode           69s                node-controller  Node ha-138000-m03 event: Registered Node ha-138000-m03 in Controller
	
	
	Name:               ha-138000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-138000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=ha-138000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T16_22_36_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:22:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-138000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 23:32:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:32:40 +0000   Thu, 15 Aug 2024 23:32:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:32:40 +0000   Thu, 15 Aug 2024 23:32:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:32:40 +0000   Thu, 15 Aug 2024 23:32:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:32:40 +0000   Thu, 15 Aug 2024 23:32:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-138000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 f4edcad8d76a442b9919d65bbd5ebb03
	  System UUID:                e49846a0-0000-0000-a846-8a8b2da04ea9
	  Boot ID:                    7d49d130-2f84-43a9-9c3e-7a69f44367c4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-m887r       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-qpth7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11s                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-138000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-138000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-138000-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-138000-m04 event: Registered Node ha-138000-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-138000-m04 event: Registered Node ha-138000-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-138000-m04 event: Registered Node ha-138000-m04 in Controller
	  Normal   NodeReady                9m57s              kubelet          Node ha-138000-m04 status is now: NodeReady
	  Normal   RegisteredNode           9m3s               node-controller  Node ha-138000-m04 event: Registered Node ha-138000-m04 in Controller
	  Normal   RegisteredNode           2m10s              node-controller  Node ha-138000-m04 event: Registered Node ha-138000-m04 in Controller
	  Normal   RegisteredNode           108s               node-controller  Node ha-138000-m04 event: Registered Node ha-138000-m04 in Controller
	  Normal   NodeNotReady             90s                node-controller  Node ha-138000-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           70s                node-controller  Node ha-138000-m04 event: Registered Node ha-138000-m04 in Controller
	  Normal   Starting                 14s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  14s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 14s (x3 over 14s)  kubelet          Node ha-138000-m04 has been rebooted, boot id: 7d49d130-2f84-43a9-9c3e-7a69f44367c4
	  Normal   NodeHasSufficientMemory  14s (x4 over 14s)  kubelet          Node ha-138000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14s (x4 over 14s)  kubelet          Node ha-138000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14s (x4 over 14s)  kubelet          Node ha-138000-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             14s                kubelet          Node ha-138000-m04 status is now: NodeNotReady
	  Normal   NodeReady                14s (x2 over 14s)  kubelet          Node ha-138000-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.035773] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007968] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.680855] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000003] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006866] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Aug15 23:30] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.162045] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.989029] systemd-fstab-generator[468]: Ignoring "noauto" option for root device
	[  +0.101466] systemd-fstab-generator[487]: Ignoring "noauto" option for root device
	[  +1.930620] systemd-fstab-generator[1017]: Ignoring "noauto" option for root device
	[  +0.060770] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.229646] systemd-fstab-generator[1112]: Ignoring "noauto" option for root device
	[  +0.119765] systemd-fstab-generator[1124]: Ignoring "noauto" option for root device
	[  +0.123401] systemd-fstab-generator[1138]: Ignoring "noauto" option for root device
	[  +2.409334] systemd-fstab-generator[1357]: Ignoring "noauto" option for root device
	[  +0.114639] systemd-fstab-generator[1369]: Ignoring "noauto" option for root device
	[  +0.103538] systemd-fstab-generator[1381]: Ignoring "noauto" option for root device
	[  +0.135144] systemd-fstab-generator[1396]: Ignoring "noauto" option for root device
	[  +0.456371] systemd-fstab-generator[1560]: Ignoring "noauto" option for root device
	[  +6.803779] kauditd_printk_skb: 234 callbacks suppressed
	[ +21.488008] kauditd_printk_skb: 40 callbacks suppressed
	[ +18.019929] kauditd_printk_skb: 21 callbacks suppressed
	[Aug15 23:31] kauditd_printk_skb: 55 callbacks suppressed
	
	
	==> etcd [59dac0b44544] <==
	{"level":"info","ts":"2024-08-15T23:29:46.384063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to 2399e7dba5b18dfe at term 2"}
	{"level":"info","ts":"2024-08-15T23:29:46.384495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to c8daa22dc1df7d56 at term 2"}
	{"level":"warn","ts":"2024-08-15T23:29:46.408477Z","caller":"etcdserver/server.go:2139","msg":"failed to publish local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-138000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
	{"level":"warn","ts":"2024-08-15T23:29:46.415071Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"2399e7dba5b18dfe","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:29:46.415120Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"2399e7dba5b18dfe","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:29:46.419833Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c8daa22dc1df7d56","rtt":"0s","error":"dial tcp 192.169.0.7:2380: i/o timeout"}
	{"level":"warn","ts":"2024-08-15T23:29:46.419980Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c8daa22dc1df7d56","rtt":"0s","error":"dial tcp 192.169.0.7:2380: i/o timeout"}
	{"level":"warn","ts":"2024-08-15T23:29:46.732045Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419357201672,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-15T23:29:47.233019Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419357201672,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-08-15T23:29:47.382392Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-15T23:29:47.382847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-15T23:29:47.383118Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-08-15T23:29:47.383277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to 2399e7dba5b18dfe at term 2"}
	{"level":"info","ts":"2024-08-15T23:29:47.383307Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to c8daa22dc1df7d56 at term 2"}
	{"level":"warn","ts":"2024-08-15T23:29:47.734052Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419357201672,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-15T23:29:48.244565Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419357201672,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-08-15T23:29:48.381923Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-15T23:29:48.382007Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-15T23:29:48.382026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-08-15T23:29:48.382045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to 2399e7dba5b18dfe at term 2"}
	{"level":"info","ts":"2024-08-15T23:29:48.382066Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to c8daa22dc1df7d56 at term 2"}
	{"level":"warn","ts":"2024-08-15T23:29:48.745537Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419357201672,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-15T23:29:49.013739Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"4.788785781s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-15T23:29:49.013790Z","caller":"traceutil/trace.go:171","msg":"trace[283476530] range","detail":"{range_begin:; range_end:; }","duration":"4.78884981s","start":"2024-08-15T23:29:44.224933Z","end":"2024-08-15T23:29:49.013782Z","steps":["trace[283476530] 'agreement among raft nodes before linearized reading'  (duration: 4.788783568s)"],"step_count":1}
	{"level":"error","ts":"2024-08-15T23:29:49.013846Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: context canceled\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	
	
	==> etcd [67b207257b40] <==
	{"level":"warn","ts":"2024-08-15T23:31:28.335171Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:31:28.340125Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:31:28.341428Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:31:28.354371Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:31:28.454431Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:31:28.554698Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:31:28.659002Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:31:28.754861Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:31:28.856395Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:31:30.243311Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c8daa22dc1df7d56","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:31:30.243321Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c8daa22dc1df7d56","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:31:31.422701Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"c8daa22dc1df7d56","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:31:31.422810Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"c8daa22dc1df7d56","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:31:35.244337Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c8daa22dc1df7d56","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:31:35.244614Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c8daa22dc1df7d56","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:31:35.424045Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"c8daa22dc1df7d56","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:31:35.424130Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"c8daa22dc1df7d56","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-15T23:31:38.649697Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"c8daa22dc1df7d56"}
	{"level":"info","ts":"2024-08-15T23:31:38.665692Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56"}
	{"level":"info","ts":"2024-08-15T23:31:38.730507Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56"}
	{"level":"info","ts":"2024-08-15T23:31:38.835070Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"c8daa22dc1df7d56","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-15T23:31:38.835279Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56"}
	{"level":"info","ts":"2024-08-15T23:31:38.864581Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"c8daa22dc1df7d56","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-15T23:31:38.864626Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56"}
	{"level":"warn","ts":"2024-08-15T23:31:40.245395Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c8daa22dc1df7d56","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	
	
	==> kernel <==
	 23:32:54 up 2 min,  0 users,  load average: 0.17, 0.19, 0.08
	Linux ha-138000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [c2a16126718b] <==
	I0815 23:23:47.704130       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:23:57.712115       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0815 23:23:57.712139       1 main.go:299] handling current node
	I0815 23:23:57.712152       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0815 23:23:57.712157       1 main.go:322] Node ha-138000-m02 has CIDR [10.244.1.0/24] 
	I0815 23:23:57.712420       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0815 23:23:57.712543       1 main.go:322] Node ha-138000-m03 has CIDR [10.244.2.0/24] 
	I0815 23:23:57.712720       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0815 23:23:57.712823       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:24:07.712424       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0815 23:24:07.712474       1 main.go:299] handling current node
	I0815 23:24:07.712488       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0815 23:24:07.712494       1 main.go:322] Node ha-138000-m02 has CIDR [10.244.1.0/24] 
	I0815 23:24:07.712623       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0815 23:24:07.712704       1 main.go:322] Node ha-138000-m03 has CIDR [10.244.2.0/24] 
	I0815 23:24:07.712814       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0815 23:24:07.712851       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:24:17.705680       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0815 23:24:17.705716       1 main.go:322] Node ha-138000-m02 has CIDR [10.244.1.0/24] 
	I0815 23:24:17.706225       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0815 23:24:17.706282       1 main.go:322] Node ha-138000-m03 has CIDR [10.244.2.0/24] 
	I0815 23:24:17.706514       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0815 23:24:17.706582       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:24:17.706957       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0815 23:24:17.707108       1 main.go:299] handling current node
	
	
	==> kindnet [d35ee4327270] <==
	I0815 23:32:20.106395       1 main.go:299] handling current node
	I0815 23:32:30.109647       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0815 23:32:30.109819       1 main.go:299] handling current node
	I0815 23:32:30.110039       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0815 23:32:30.110182       1 main.go:322] Node ha-138000-m02 has CIDR [10.244.1.0/24] 
	I0815 23:32:30.110510       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0815 23:32:30.110595       1 main.go:322] Node ha-138000-m03 has CIDR [10.244.2.0/24] 
	I0815 23:32:30.110933       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0815 23:32:30.111022       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:32:40.105535       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0815 23:32:40.105612       1 main.go:322] Node ha-138000-m03 has CIDR [10.244.2.0/24] 
	I0815 23:32:40.105819       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0815 23:32:40.105982       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:32:40.106120       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0815 23:32:40.106193       1 main.go:299] handling current node
	I0815 23:32:40.106215       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0815 23:32:40.106293       1 main.go:322] Node ha-138000-m02 has CIDR [10.244.1.0/24] 
	I0815 23:32:50.106155       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0815 23:32:50.106270       1 main.go:299] handling current node
	I0815 23:32:50.106298       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0815 23:32:50.106308       1 main.go:322] Node ha-138000-m02 has CIDR [10.244.1.0/24] 
	I0815 23:32:50.106511       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0815 23:32:50.106560       1 main.go:322] Node ha-138000-m03 has CIDR [10.244.2.0/24] 
	I0815 23:32:50.106643       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0815 23:32:50.106701       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
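Both kindnet containers log the same fixed per-node loop: the node it runs on is "handled" locally, and each remote node's pod CIDR is associated with that node's InternalIP. A hypothetical sketch of that loop, with the names, IPs, and CIDRs copied from the log above (the current node's own CIDR is assumed) and the route printed instead of installed:

	package main

	import "fmt"

	type node struct {
		name    string
		ip      string // InternalIP from the log
		podCIDR string // pod network CIDR from the log
		current bool   // the node this instance runs on
	}

	func main() {
		nodes := []node{
			{"ha-138000", "192.169.0.5", "10.244.0.0/24", true}, // CIDR assumed; log omits it
			{"ha-138000-m02", "192.169.0.6", "10.244.1.0/24", false},
			{"ha-138000-m03", "192.169.0.7", "10.244.2.0/24", false},
			{"ha-138000-m04", "192.169.0.8", "10.244.3.0/24", false},
		}
		for _, n := range nodes {
			fmt.Printf("Handling node with IPs: map[%s:{}]\n", n.ip)
			if n.current {
				fmt.Println("handling current node") // local CIDR needs no route
				continue
			}
			// One pod-network route per remote node, via that node's IP.
			fmt.Printf("would run: ip route replace %s via %s\n", n.podCIDR, n.ip)
		}
	}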
	
	
	==> kube-apiserver [5ed11c46e0eb] <==
	I0815 23:29:32.056397       1 options.go:228] external host was not specified, using 192.169.0.5
	I0815 23:29:32.057840       1 server.go:142] Version: v1.31.0
	I0815 23:29:32.057961       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:29:32.445995       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0815 23:29:32.449536       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 23:29:32.452083       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0815 23:29:32.452114       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0815 23:29:32.452276       1 instance.go:232] Using reconciler: lease
	W0815 23:29:49.041556       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:33594->127.0.0.1:2379: read: connection reset by peer"
	W0815 23:29:49.041696       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:33564->127.0.0.1:2379: read: connection reset by peer"
	W0815 23:29:49.041767       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:33580->127.0.0.1:2379: read: connection reset by peer"
	W0815 23:29:50.044022       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:29:50.044031       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:29:50.044267       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:29:51.372028       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:29:51.388445       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:29:51.855782       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0815 23:29:52.453885       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
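This apiserver instance died during startup: its etcd client first failed the TLS handshake against 127.0.0.1:2379 (etcd was resetting connections), then the plain TCP dial was refused, and once the storage-factory deadline expired the process exited fatally. A hypothetical connectivity probe of that endpoint with grpc-go; the real client dials with etcd client certificates, which this sketch omits, so a healthy TLS-only etcd would also reject it at the handshake:

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
		defer cancel()
		conn, err := grpc.DialContext(ctx, "127.0.0.1:2379",
			grpc.WithTransportCredentials(insecure.NewCredentials()),
			grpc.WithBlock()) // fail here instead of connecting lazily
		if err != nil {
			fmt.Println("etcd endpoint not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("connected to", conn.Target())
	}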
	
	
	==> kube-apiserver [a6baf6e21d6c] <==
	I0815 23:30:40.344140       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0815 23:30:40.344259       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0815 23:30:40.418768       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0815 23:30:40.419548       1 shared_informer.go:320] Caches are synced for configmaps
	I0815 23:30:40.420315       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0815 23:30:40.420931       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0815 23:30:40.424034       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0815 23:30:40.424129       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0815 23:30:40.424470       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0815 23:30:40.424883       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0815 23:30:40.425391       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0815 23:30:40.425745       1 aggregator.go:171] initial CRD sync complete...
	I0815 23:30:40.425776       1 autoregister_controller.go:144] Starting autoregister controller
	I0815 23:30:40.425782       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0815 23:30:40.425786       1 cache.go:39] Caches are synced for autoregister controller
	I0815 23:30:40.429758       1 shared_informer.go:320] Caches are synced for node_authorizer
	W0815 23:30:40.433000       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.6]
	I0815 23:30:40.451364       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 23:30:40.451641       1 policy_source.go:224] refreshing policies
	I0815 23:30:40.467536       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0815 23:30:40.536982       1 controller.go:615] quota admission added evaluator for: endpoints
	I0815 23:30:40.548680       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0815 23:30:40.556609       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0815 23:30:41.331073       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0815 23:30:41.666666       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	
	
	==> kube-controller-manager [2ed9ae042726] <==
	I0815 23:30:20.677986       1 serving.go:386] Generated self-signed cert in-memory
	I0815 23:30:20.928931       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0815 23:30:20.928987       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:30:20.930507       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0815 23:30:20.930593       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0815 23:30:20.931118       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0815 23:30:20.931317       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0815 23:30:40.940723       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
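The error above embeds the apiserver's /healthz?verbose response: every check passes except the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks, so the controller manager keeps waiting. A hypothetical way to fetch the same check list by hand, with the host and port taken from these logs and certificate verification skipped for brevity:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
			},
		}
		resp, err := client.Get("https://192.169.0.5:8443/healthz?verbose")
		if err != nil {
			fmt.Println("healthz request failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s\n%s", resp.Status, body) // prints the [+]/[-] lines seen above
	}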
	
	
	==> kube-controller-manager [f4a0ec142726] <==
	I0815 23:31:24.544150       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m04"
	I0815 23:31:24.554575       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m03"
	I0815 23:31:24.555197       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m04"
	I0815 23:31:24.606976       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.863592ms"
	I0815 23:31:24.607212       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="136.494µs"
	I0815 23:31:26.811428       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m04"
	I0815 23:31:29.784539       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m03"
	I0815 23:31:34.097680       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="7.208476ms"
	I0815 23:31:34.098036       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="54.359µs"
	I0815 23:31:34.111201       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-g7wk5 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-g7wk5\": the object has been modified; please apply your changes to the latest version and try again"
	I0815 23:31:34.111585       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"0bde8909-370a-4104-803d-243eecab8628", APIVersion:"v1", ResourceVersion:"258", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-g7wk5 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-g7wk5": the object has been modified; please apply your changes to the latest version and try again
	I0815 23:31:36.890364       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m03"
	I0815 23:31:37.466268       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m03"
	I0815 23:31:37.479273       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m03"
	I0815 23:31:38.389032       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="35.893µs"
	I0815 23:31:39.577414       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.216738ms"
	I0815 23:31:39.577503       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.266µs"
	I0815 23:31:39.708978       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m03"
	I0815 23:31:39.869802       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m04"
	I0815 23:31:44.491193       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m04"
	I0815 23:31:44.581910       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m04"
	I0815 23:32:40.799384       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-138000-m04"
	I0815 23:32:40.799635       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m04"
	I0815 23:32:40.809568       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m04"
	I0815 23:32:41.795116       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m04"
	
	
	==> kube-proxy [3102e608c7d6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 23:30:59.351348       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 23:30:59.378221       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0815 23:30:59.378378       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 23:30:59.417171       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 23:30:59.417213       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 23:30:59.417230       1 server_linux.go:169] "Using iptables Proxier"
	I0815 23:30:59.420831       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 23:30:59.421491       1 server.go:483] "Version info" version="v1.31.0"
	I0815 23:30:59.421522       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:30:59.424760       1 config.go:197] "Starting service config controller"
	I0815 23:30:59.425626       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 23:30:59.426090       1 config.go:104] "Starting endpoint slice config controller"
	I0815 23:30:59.426116       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 23:30:59.427803       1 config.go:326] "Starting node config controller"
	I0815 23:30:59.428510       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 23:30:59.526834       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 23:30:59.526859       1 shared_informer.go:320] Caches are synced for service config
	I0815 23:30:59.528661       1 shared_informer.go:320] Caches are synced for node config
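The truncated block at the top of this log is the tail of the same cleanup failure logged at 23:30:59: kube-proxy pipes an "add table" ruleset into nft on stdin (hence the "/dev/stdin" position in the error), the kernel rejects it with "Operation not supported", and kube-proxy proceeds with the iptables proxier instead. A hypothetical reproduction of that probe, to be run inside the VM:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("nft", "-f", "-") // read the ruleset from stdin
		cmd.Stdin = strings.NewReader("add table ip kube-proxy\n")
		if out, err := cmd.CombinedOutput(); err != nil {
			// On this kernel: "Error: Could not process rule: Operation not supported"
			fmt.Printf("nft failed: %v\n%s", err, out)
			return
		}
		fmt.Println("nftables available; table created")
	}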
	
	
	==> kube-proxy [fc2e141007ef] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 23:19:33.922056       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 23:19:33.939645       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0815 23:19:33.939881       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 23:19:33.966815       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 23:19:33.966963       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 23:19:33.967061       1 server_linux.go:169] "Using iptables Proxier"
	I0815 23:19:33.969119       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 23:19:33.969437       1 server.go:483] "Version info" version="v1.31.0"
	I0815 23:19:33.969466       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:19:33.970289       1 config.go:197] "Starting service config controller"
	I0815 23:19:33.970403       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 23:19:33.970441       1 config.go:104] "Starting endpoint slice config controller"
	I0815 23:19:33.970446       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 23:19:33.970870       1 config.go:326] "Starting node config controller"
	I0815 23:19:33.970895       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 23:19:34.070930       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 23:19:34.070930       1 shared_informer.go:320] Caches are synced for node config
	I0815 23:19:34.070944       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [ac6935271595] <==
	W0815 23:29:03.654257       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:29:03.654675       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:29:04.192220       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:29:04.192311       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:29:07.683875       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:29:07.683942       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:29:07.708489       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:29:07.708791       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:29:17.257133       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:29:17.257240       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:29:26.626316       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:29:26.626443       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:29:29.967116       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:29:29.967155       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:29:42.147720       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0815 23:29:42.148149       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0815 23:29:43.616204       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0815 23:29:43.616440       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0815 23:29:45.922991       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0815 23:29:45.923106       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	E0815 23:29:49.027901       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	I0815 23:29:49.028326       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0815 23:29:49.028478       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0815 23:29:49.028500       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file" logger="UnhandledError"
	E0815 23:29:49.029058       1 run.go:72] "command failed" err="finished without leader elect"
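Each reflector warn/error pair above is one failed LIST against https://192.169.0.5:8443 while the apiserver was unavailable, first with connection refused and later with TLS handshake timeouts. A hypothetical equivalent of one of those calls with client-go, assuming a kubeconfig at the default path; while the endpoint is down it returns the same errors the reflector logs:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// Mirrors the reflector's paged list, e.g. persistentvolumes?limit=500.
		pvs, err := clientset.CoreV1().PersistentVolumes().List(context.TODO(),
			metav1.ListOptions{Limit: 500})
		if err != nil {
			fmt.Println("list failed:", err) // "connection refused" while apiserver is down
			return
		}
		fmt.Println("persistent volumes:", len(pvs.Items))
	}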
	
	
	==> kube-scheduler [c2ddb52a9846] <==
	I0815 23:30:20.706878       1 serving.go:386] Generated self-signed cert in-memory
	W0815 23:30:31.075526       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0815 23:30:31.075552       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0815 23:30:31.075556       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0815 23:30:40.370669       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0815 23:30:40.370712       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:30:40.375435       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0815 23:30:40.379182       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0815 23:30:40.379313       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0815 23:30:40.379473       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0815 23:30:40.480276       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 23:30:46 ha-138000 kubelet[1567]: E0815 23:30:46.668035    1567 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.11.1,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {<nil>} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{73400320 0} {<nil>} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8bnpm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod coredns-6f6b679f8f-dmgt5_kube-system(47d73953-ec2c-4f17-b2b8-d6a9b5e5a316): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Aug 15 23:30:46 ha-138000 kubelet[1567]: E0815 23:30:46.668170    1567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="default/busybox-7dff88458-wgww9" podUID="b8eb799e-e761-4647-8aae-388c38bc936e"
	Aug 15 23:30:46 ha-138000 kubelet[1567]: E0815 23:30:46.669336    1567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-6f6b679f8f-dmgt5" podUID="47d73953-ec2c-4f17-b2b8-d6a9b5e5a316"
	Aug 15 23:30:46 ha-138000 kubelet[1567]: E0815 23:30:46.669381    1567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-6f6b679f8f-zc8jj" podUID="b4a9df39-b09d-4bc3-97f6-b3176ff8e842"
	Aug 15 23:30:46 ha-138000 kubelet[1567]: E0815 23:30:46.669395    1567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-cznkn" podUID="61cd1a9a-80fb-4d0e-a4dd-f5ed2d6ddc0f"
	Aug 15 23:30:49 ha-138000 kubelet[1567]: I0815 23:30:49.205357    1567 scope.go:117] "RemoveContainer" containerID="2ed9ae04272666896274c0cc9cbac7e240c18a02b0b35eaab975e10a79d1a635"
	Aug 15 23:30:49 ha-138000 kubelet[1567]: E0815 23:30:49.205497    1567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-138000_kube-system(ed196a03081880609aebd781f662c0b9)\"" pod="kube-system/kube-controller-manager-ha-138000" podUID="ed196a03081880609aebd781f662c0b9"
	Aug 15 23:30:58 ha-138000 kubelet[1567]: I0815 23:30:58.825605    1567 scope.go:117] "RemoveContainer" containerID="3e8b806ef4f33fe0f0fca48027df27c689fecf6b07621dedc5ef13adcc0374c3"
	Aug 15 23:30:58 ha-138000 kubelet[1567]: I0815 23:30:58.827210    1567 scope.go:117] "RemoveContainer" containerID="fc2e141007efbc5a944ce056112991ed717c9f8dc75269aa7a0eac8f8dde6098"
	Aug 15 23:30:58 ha-138000 kubelet[1567]: I0815 23:30:58.827640    1567 scope.go:117] "RemoveContainer" containerID="c2a16126718b32a024e2d52492029acb6291ffb8595d909499955382a9b4b0d1"
	Aug 15 23:30:59 ha-138000 kubelet[1567]: I0815 23:30:59.824309    1567 scope.go:117] "RemoveContainer" containerID="6a1122913bb1811dd9cfff9fde8c221a2c969f80db1f0bcc1a66f58faaa88395"
	Aug 15 23:31:00 ha-138000 kubelet[1567]: I0815 23:31:00.825729    1567 scope.go:117] "RemoveContainer" containerID="8f20284cd3969cd69aa4dd7eb37b8d05c7df4f53aa8c6f636949fd401174eba1"
	Aug 15 23:31:01 ha-138000 kubelet[1567]: I0815 23:31:01.825360    1567 scope.go:117] "RemoveContainer" containerID="42f5d82b004174c93ffa1441e156ff5ca6d23b9457598805927d06b8823a41bd"
	Aug 15 23:31:03 ha-138000 kubelet[1567]: I0815 23:31:03.825285    1567 scope.go:117] "RemoveContainer" containerID="2ed9ae04272666896274c0cc9cbac7e240c18a02b0b35eaab975e10a79d1a635"
	Aug 15 23:31:12 ha-138000 kubelet[1567]: E0815 23:31:12.861012    1567 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 23:31:12 ha-138000 kubelet[1567]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 23:31:12 ha-138000 kubelet[1567]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 23:31:12 ha-138000 kubelet[1567]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 23:31:12 ha-138000 kubelet[1567]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 23:31:12 ha-138000 kubelet[1567]: I0815 23:31:12.976621    1567 scope.go:117] "RemoveContainer" containerID="e919017e14bb91f5bec7b5fdf0351f27904f841341d654e814d90d000a091f26"
	Aug 15 23:32:12 ha-138000 kubelet[1567]: E0815 23:32:12.862060    1567 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 23:32:12 ha-138000 kubelet[1567]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 23:32:12 ha-138000 kubelet[1567]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 23:32:12 ha-138000 kubelet[1567]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 23:32:12 ha-138000 kubelet[1567]:  > table="nat" chain="KUBE-KUBELET-CANARY"
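The repeated CreateContainerConfigError above ("services have not yet been read at least once, cannot construct envvars") means kubelet could not yet build the per-service environment variables it injects into every container, because its service informer had not synced after the restart. A small sketch of the Docker-link-style naming scheme those variables follow, using the kube-dns service from these logs with an assumed ClusterIP and port:

	package main

	import (
		"fmt"
		"strings"
	)

	// serviceEnv renders the HOST/PORT variables derived from a service name.
	func serviceEnv(name, clusterIP string, port int) []string {
		prefix := strings.ReplaceAll(strings.ToUpper(name), "-", "_") // kube-dns -> KUBE_DNS
		return []string{
			fmt.Sprintf("%s_SERVICE_HOST=%s", prefix, clusterIP),
			fmt.Sprintf("%s_SERVICE_PORT=%d", prefix, port),
		}
	}

	func main() {
		// ClusterIP and port are assumptions for illustration.
		for _, v := range serviceEnv("kube-dns", "10.96.0.10", 53) {
			fmt.Println(v)
		}
	}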
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-138000 -n ha-138000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-138000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (4.29s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (81.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-138000 --control-plane -v=7 --alsologtostderr
E0815 16:33:57.691745    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-138000 --control-plane -v=7 --alsologtostderr: (1m16.893175579s)
ha_test.go:611: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 status -v=7 --alsologtostderr
ha_test.go:616: status says not all three control-plane nodes are present: args "out/minikube-darwin-amd64 -p ha-138000 status -v=7 --alsologtostderr": ha-138000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-138000-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-138000-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-138000-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-138000-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha_test.go:619: status says not all four hosts are running: args "out/minikube-darwin-amd64 -p ha-138000 status -v=7 --alsologtostderr": ha-138000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-138000-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-138000-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-138000-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-138000-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha_test.go:622: status says not all four kubelets are running: args "out/minikube-darwin-amd64 -p ha-138000 status -v=7 --alsologtostderr": ha-138000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-138000-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-138000-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-138000-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-138000-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha_test.go:625: status says not all three apiservers are running: args "out/minikube-darwin-amd64 -p ha-138000 status -v=7 --alsologtostderr": ha-138000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-138000-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-138000-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-138000-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-138000-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
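After the node add, the status output lists five nodes, four of them control planes, so each of the four assertions above reports a count mismatch against the expected three control planes and four hosts. A hypothetical tally of that output by counting its "type:" lines (the real ha_test.go checks are stricter than this):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Abbreviated copy of the status dump above: m01/m02/m03/m05 are
		// control planes, m04 is a worker.
		status := `type: Control Plane
	type: Control Plane
	type: Control Plane
	type: Worker
	type: Control Plane`
		fmt.Println("control planes:", strings.Count(status, "type: Control Plane")) // 4
		fmt.Println("workers:", strings.Count(status, "type: Worker"))               // 1
	}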

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-138000 -n ha-138000
helpers_test.go:244: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-138000 logs -n 25: (3.528908051s)
helpers_test.go:252: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n ha-138000-m04 sudo cat                                                                                      | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /home/docker/cp-test_ha-138000-m03_ha-138000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-138000 cp testdata/cp-test.txt                                                                                            | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-138000 cp ha-138000-m04:/home/docker/cp-test.txt                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1119105544/001/cp-test_ha-138000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-138000 cp ha-138000-m04:/home/docker/cp-test.txt                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000:/home/docker/cp-test_ha-138000-m04_ha-138000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n ha-138000 sudo cat                                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /home/docker/cp-test_ha-138000-m04_ha-138000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-138000 cp ha-138000-m04:/home/docker/cp-test.txt                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m02:/home/docker/cp-test_ha-138000-m04_ha-138000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n ha-138000-m02 sudo cat                                                                                      | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /home/docker/cp-test_ha-138000-m04_ha-138000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-138000 cp ha-138000-m04:/home/docker/cp-test.txt                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m03:/home/docker/cp-test_ha-138000-m04_ha-138000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n ha-138000-m03 sudo cat                                                                                      | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /home/docker/cp-test_ha-138000-m04_ha-138000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-138000 node stop m02 -v=7                                                                                                 | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-138000 node start m02 -v=7                                                                                                | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:24 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-138000 -v=7                                                                                                       | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:24 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-138000 -v=7                                                                                                            | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:24 PDT | 15 Aug 24 16:24 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-138000 --wait=true -v=7                                                                                                | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:24 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-138000                                                                                                            | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:26 PDT |                     |
	| node    | ha-138000 node delete m03 -v=7                                                                                               | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:26 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-138000 stop -v=7                                                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:27 PDT | 15 Aug 24 16:29 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-138000 --wait=true                                                                                                     | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:29 PDT | 15 Aug 24 16:32 PDT |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	| node    | add -p ha-138000                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:32 PDT | 15 Aug 24 16:34 PDT |
	|         | --control-plane -v=7                                                                                                         |           |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 16:29:54
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
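
Every entry below uses the klog header format documented in the line above ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg). When slicing these reports it can help to parse that header; a small sketch in Go (the capture names are my own, not minikube's):

    package main

    import (
        "fmt"
        "regexp"
    )

    // klogRe captures the header fields named in the "Log line format" line above:
    // level, date (mmdd), timestamp, thread id, file:line, and the message.
    var klogRe = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

    func main() {
        line := `I0815 16:29:54.033682    3848 out.go:345] Setting OutFile to fd 1 ...`
        if m := klogRe.FindStringSubmatch(line); m != nil {
            fmt.Printf("level=%s date=%s time=%s tid=%s at=%s msg=%q\n",
                m[1], m[2], m[3], m[4], m[5], m[6])
        }
    }
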
	I0815 16:29:54.033682    3848 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:29:54.033848    3848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:29:54.033854    3848 out.go:358] Setting ErrFile to fd 2...
	I0815 16:29:54.033858    3848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:29:54.034027    3848 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-977/.minikube/bin
	I0815 16:29:54.035457    3848 out.go:352] Setting JSON to false
	I0815 16:29:54.058003    3848 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1765,"bootTime":1723762829,"procs":428,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0815 16:29:54.058095    3848 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 16:29:54.080014    3848 out.go:177] * [ha-138000] minikube v1.33.1 on Darwin 14.6.1
	I0815 16:29:54.122634    3848 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 16:29:54.122696    3848 notify.go:220] Checking for updates...
	I0815 16:29:54.164406    3848 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:29:54.185700    3848 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0815 16:29:54.206554    3848 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 16:29:54.227614    3848 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-977/.minikube
	I0815 16:29:54.248519    3848 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 16:29:54.270441    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:29:54.271125    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:29:54.271225    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:29:54.280836    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52223
	I0815 16:29:54.281188    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:29:54.281595    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:29:54.281610    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:29:54.281823    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:29:54.281934    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:29:54.282121    3848 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 16:29:54.282360    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:29:54.282379    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:29:54.290749    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52225
	I0815 16:29:54.291068    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:29:54.291384    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:29:54.291393    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:29:54.291633    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:29:54.291762    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:29:54.320542    3848 out.go:177] * Using the hyperkit driver based on existing profile
	I0815 16:29:54.362577    3848 start.go:297] selected driver: hyperkit
	I0815 16:29:54.362603    3848 start.go:901] validating driver "hyperkit" against &{Name:ha-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:29:54.362832    3848 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 16:29:54.363029    3848 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:29:54.363230    3848 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19452-977/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0815 16:29:54.372833    3848 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0815 16:29:54.376641    3848 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:29:54.376661    3848 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0815 16:29:54.379303    3848 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 16:29:54.379340    3848 cni.go:84] Creating CNI manager for ""
	I0815 16:29:54.379348    3848 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0815 16:29:54.379445    3848 start.go:340] cluster config:
	{Name:ha-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
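
The blob above is the profile's persisted cluster config; minikube round-trips it through the config.json whose "Saving config to ..." path is logged a few lines below. A minimal sketch of reading a few of those fields back (the struct is a hypothetical subset; minikube's real config type carries every field shown in the dump):

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // ClusterConfig mirrors a handful of the fields visible in the dump above.
    type ClusterConfig struct {
        Name   string
        Driver string
        Memory int
        CPUs   int
        Nodes  []struct {
            Name         string
            IP           string
            Port         int
            ControlPlane bool
            Worker       bool
        }
    }

    func main() {
        // Profile path taken from the "Saving config to ..." log line.
        data, err := os.ReadFile("/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json")
        if err != nil {
            panic(err)
        }
        var cfg ClusterConfig
        if err := json.Unmarshal(data, &cfg); err != nil {
            panic(err)
        }
        for _, n := range cfg.Nodes {
            fmt.Printf("node %q ip=%s control-plane=%v\n", n.Name, n.IP, n.ControlPlane)
        }
    }
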
	I0815 16:29:54.379558    3848 iso.go:125] acquiring lock: {Name:mkb93fc66b60be15dffa9eb2999a4be8d8d4d218 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:29:54.421457    3848 out.go:177] * Starting "ha-138000" primary control-plane node in "ha-138000" cluster
	I0815 16:29:54.442393    3848 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:29:54.442490    3848 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0815 16:29:54.442517    3848 cache.go:56] Caching tarball of preloaded images
	I0815 16:29:54.442747    3848 preload.go:172] Found /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0815 16:29:54.442766    3848 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:29:54.442942    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:29:54.443891    3848 start.go:360] acquireMachinesLock for ha-138000: {Name:mkac8034c8a60617271f2754a03dcaf74fc4fef7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:29:54.444072    3848 start.go:364] duration metric: took 141.088µs to acquireMachinesLock for "ha-138000"
	I0815 16:29:54.444120    3848 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:29:54.444137    3848 fix.go:54] fixHost starting: 
	I0815 16:29:54.444553    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:29:54.444588    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:29:54.453701    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52227
	I0815 16:29:54.454060    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:29:54.454408    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:29:54.454428    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:29:54.454668    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:29:54.454795    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:29:54.454900    3848 main.go:141] libmachine: (ha-138000) Calling .GetState
	I0815 16:29:54.455015    3848 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:29:54.455069    3848 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid from json: 3662
	I0815 16:29:54.455998    3848 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid 3662 missing from process table
	I0815 16:29:54.456024    3848 fix.go:112] recreateIfNeeded on ha-138000: state=Stopped err=<nil>
	I0815 16:29:54.456037    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	W0815 16:29:54.456128    3848 fix.go:138] unexpected machine state, will restart: <nil>
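
"hyperkit pid 3662 missing from process table" is how fixHost concludes state=Stopped: the pid recorded in the machine's JSON no longer maps to a live process, so the VM must be restarted. On Unix that kind of check is conventionally a signal-0 probe; a sketch of the idea (not minikube's actual code):

    package main

    import (
        "fmt"
        "os"
        "syscall"
    )

    // pidAlive reports whether a process with the given pid currently exists.
    // os.FindProcess always succeeds on Unix, so the real test is signal 0,
    // which performs the existence/permission checks without delivering anything.
    func pidAlive(pid int) bool {
        p, err := os.FindProcess(pid)
        if err != nil {
            return false
        }
        return p.Signal(syscall.Signal(0)) == nil
    }

    func main() {
        fmt.Println(pidAlive(3662)) // the stale pid from the machine JSON above
    }
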
	I0815 16:29:54.477408    3848 out.go:177] * Restarting existing hyperkit VM for "ha-138000" ...
	I0815 16:29:54.498281    3848 main.go:141] libmachine: (ha-138000) Calling .Start
	I0815 16:29:54.498449    3848 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:29:54.498522    3848 main.go:141] libmachine: (ha-138000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/hyperkit.pid
	I0815 16:29:54.498549    3848 main.go:141] libmachine: (ha-138000) DBG | Using UUID bf1b12d0-37a9-4c04-a028-0dd0a6dcd337
	I0815 16:29:54.612230    3848 main.go:141] libmachine: (ha-138000) DBG | Generated MAC 66:4d:cd:54:35:15
	I0815 16:29:54.612256    3848 main.go:141] libmachine: (ha-138000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000
	I0815 16:29:54.612403    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bf1b12d0-37a9-4c04-a028-0dd0a6dcd337", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002a9530)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:29:54.612447    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bf1b12d0-37a9-4c04-a028-0dd0a6dcd337", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002a9530)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:29:54.612479    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "bf1b12d0-37a9-4c04-a028-0dd0a6dcd337", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/ha-138000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"}
	I0815 16:29:54.612534    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U bf1b12d0-37a9-4c04-a028-0dd0a6dcd337 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/ha-138000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/console-ring -f kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"
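
The two DEBUG lines above record the exact argv the driver hands to hyperkit. Reproducing that launch with os/exec looks roughly like the sketch below (flag list trimmed to the ones visible above; the state directory is a placeholder):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        state := "/path/to/.minikube/machines/ha-138000" // stand-in for the StateDir above
        cmd := exec.Command("/usr/local/bin/hyperkit",
            "-A", "-u",
            "-F", state+"/hyperkit.pid", // pid file the next start will probe
            "-c", "2", // vCPUs
            "-m", "2200M", // memory
            "-s", "0:0,hostbridge",
            "-s", "31,lpc",
            "-s", "1:0,virtio-net",
            "-U", "bf1b12d0-37a9-4c04-a028-0dd0a6dcd337", // stable UUID so the DHCP lease (and IP) survives restarts
            "-s", "2:0,virtio-blk,"+state+"/ha-138000.rawdisk",
            "-s", "3,ahci-cd,"+state+"/boot2docker.iso",
            "-s", "4,virtio-rnd",
            "-f", "kexec,"+state+"/bzimage,"+state+"/initrd,earlyprintk=serial loglevel=3 console=ttyS0",
        )
        if err := cmd.Start(); err != nil {
            log.Fatal(err)
        }
        log.Printf("hyperkit started, pid %d", cmd.Process.Pid)
    }
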
	I0815 16:29:54.612554    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0815 16:29:54.613954    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 DEBUG: hyperkit: Pid is 3862
	I0815 16:29:54.614352    3848 main.go:141] libmachine: (ha-138000) DBG | Attempt 0
	I0815 16:29:54.614367    3848 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:29:54.614458    3848 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid from json: 3862
	I0815 16:29:54.615668    3848 main.go:141] libmachine: (ha-138000) DBG | Searching for 66:4d:cd:54:35:15 in /var/db/dhcpd_leases ...
	I0815 16:29:54.615762    3848 main.go:141] libmachine: (ha-138000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0815 16:29:54.615788    3848 main.go:141] libmachine: (ha-138000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66be8f71}
	I0815 16:29:54.615808    3848 main.go:141] libmachine: (ha-138000) DBG | Found match: 66:4d:cd:54:35:15
	I0815 16:29:54.615836    3848 main.go:141] libmachine: (ha-138000) DBG | IP: 192.169.0.5
	I0815 16:29:54.615932    3848 main.go:141] libmachine: (ha-138000) Calling .GetConfigRaw
	I0815 16:29:54.616670    3848 main.go:141] libmachine: (ha-138000) Calling .GetIP
	I0815 16:29:54.616859    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:29:54.617254    3848 machine.go:93] provisionDockerMachine start ...
	I0815 16:29:54.617264    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:29:54.617414    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:29:54.617528    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:29:54.617607    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:29:54.617679    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:29:54.617801    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:29:54.617967    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:29:54.618192    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:29:54.618201    3848 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 16:29:54.621800    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0815 16:29:54.673574    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0815 16:29:54.674258    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:29:54.674277    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:29:54.674284    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:29:54.674293    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:29:55.057707    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:55 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0815 16:29:55.057723    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:55 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0815 16:29:55.172245    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:29:55.172277    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:29:55.172313    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:29:55.172333    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:29:55.173142    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:55 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0815 16:29:55.173153    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:55 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0815 16:30:00.749814    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:30:00 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0815 16:30:00.749867    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:30:00 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0815 16:30:00.749877    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:30:00 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0815 16:30:00.774690    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:30:00 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0815 16:30:05.697072    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
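
That `hostname` probe is the first of many commands minikube runs over SSH to 192.169.0.5:22 as user docker with the machine's id_rsa key (both visible in the ssh client lines below). A minimal equivalent using golang.org/x/crypto/ssh (host-key verification disabled purely for brevity, tolerable for a throwaway test VM but nothing else):

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // brevity only
        }
        client, err := ssh.Dial("tcp", "192.169.0.5:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        out, err := sess.Output("hostname") // same probe as above
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s", out)
    }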
	
	I0815 16:30:05.697084    3848 main.go:141] libmachine: (ha-138000) Calling .GetMachineName
	I0815 16:30:05.697230    3848 buildroot.go:166] provisioning hostname "ha-138000"
	I0815 16:30:05.697241    3848 main.go:141] libmachine: (ha-138000) Calling .GetMachineName
	I0815 16:30:05.697340    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:05.697431    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:05.697531    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:05.697615    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:05.697729    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:05.697864    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:05.698023    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:30:05.698032    3848 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-138000 && echo "ha-138000" | sudo tee /etc/hostname
	I0815 16:30:05.773271    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-138000
	
	I0815 16:30:05.773290    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:05.773430    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:05.773543    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:05.773660    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:05.773777    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:05.773935    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:05.774084    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:30:05.774095    3848 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-138000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-138000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-138000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 16:30:05.843913    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
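
The inline script above is an idempotent /etc/hosts edit: leave the file alone if the new hostname is already mapped, rewrite an existing 127.0.1.1 line if there is one, otherwise append. The same decision tree, expressed on the file contents as a string (a sketch; on the VM this needs sudo, exactly as in the log):

    package main

    import (
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    // ensureHostname mirrors the grep/sed/tee logic above.
    func ensureHostname(hosts, name string) string {
        if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
            return hosts // hostname already mapped: nothing to do
        }
        re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if re.MatchString(hosts) {
            return re.ReplaceAllString(hosts, "127.0.1.1 "+name) // rewrite in place
        }
        return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n" // append
    }

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        fmt.Print(ensureHostname(string(data), "ha-138000"))
    }
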
	I0815 16:30:05.843933    3848 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19452-977/.minikube CaCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19452-977/.minikube}
	I0815 16:30:05.843947    3848 buildroot.go:174] setting up certificates
	I0815 16:30:05.843955    3848 provision.go:84] configureAuth start
	I0815 16:30:05.843962    3848 main.go:141] libmachine: (ha-138000) Calling .GetMachineName
	I0815 16:30:05.844101    3848 main.go:141] libmachine: (ha-138000) Calling .GetIP
	I0815 16:30:05.844215    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:05.844315    3848 provision.go:143] copyHostCerts
	I0815 16:30:05.844350    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:30:05.844436    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem, removing ...
	I0815 16:30:05.844445    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:30:05.844633    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem (1082 bytes)
	I0815 16:30:05.844853    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:30:05.844900    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem, removing ...
	I0815 16:30:05.844906    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:30:05.844989    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem (1123 bytes)
	I0815 16:30:05.845165    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:30:05.845202    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem, removing ...
	I0815 16:30:05.845207    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:30:05.845283    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem (1679 bytes)
	I0815 16:30:05.845432    3848 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem org=jenkins.ha-138000 san=[127.0.0.1 192.169.0.5 ha-138000 localhost minikube]
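
provision.go:117 mints a per-machine server certificate whose SANs are the literal list in the log line: 127.0.0.1, 192.169.0.5, ha-138000, localhost, minikube. A compact crypto/x509 sketch of issuing such a cert (a throwaway CA is generated inline here, whereas minikube loads the ca.pem/ca-key.pem shown in the log; the one-year validity is an arbitrary choice for the sketch):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        ca := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "throwaway-ca"}, // stand-in CA
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(1, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srv := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-138000"}}, // org= from the log
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs copied from the san=[...] list above.
            DNSNames:    []string{"ha-138000", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
        }
        der, err := x509.CreateCertificate(rand.Reader, srv, ca, &srvKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
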
	I0815 16:30:06.272971    3848 provision.go:177] copyRemoteCerts
	I0815 16:30:06.273031    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 16:30:06.273048    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:06.273185    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:06.273289    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:06.273389    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:06.273476    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:30:06.313671    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 16:30:06.313804    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 16:30:06.335207    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 16:30:06.335264    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0815 16:30:06.355028    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 16:30:06.355085    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 16:30:06.374691    3848 provision.go:87] duration metric: took 530.722569ms to configureAuth
	I0815 16:30:06.374705    3848 buildroot.go:189] setting minikube options for container-runtime
	I0815 16:30:06.374882    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:30:06.374898    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:30:06.375031    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:06.375135    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:06.375215    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:06.375302    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:06.375381    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:06.375501    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:06.375633    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:30:06.375641    3848 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0815 16:30:06.439797    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0815 16:30:06.439813    3848 buildroot.go:70] root file system type: tmpfs
	I0815 16:30:06.439885    3848 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0815 16:30:06.439896    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:06.440029    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:06.440119    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:06.440211    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:06.440322    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:06.440461    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:06.440594    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:30:06.440647    3848 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0815 16:30:06.516125    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0815 16:30:06.516150    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:06.516294    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:06.516408    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:06.516493    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:06.516594    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:06.516721    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:06.516850    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:30:06.516863    3848 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0815 16:30:08.163546    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0815 16:30:08.163562    3848 machine.go:96] duration metric: took 13.546346493s to provisionDockerMachine
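
The `diff -u ... || { mv ...; systemctl ...; }` one-liner above is an idempotence guard: Docker is only reloaded, enabled, and restarted when the freshly rendered unit differs from the one on disk (here the file did not exist yet, hence the diff error and the new symlink). The same guard expressed in Go (a sketch; minikube executes these steps over SSH with sudo):

    package main

    import (
        "bytes"
        "fmt"
        "log"
        "os"
        "os/exec"
    )

    // installUnit writes unit to path and cycles the service, but only when
    // the content actually changed.
    func installUnit(path string, unit []byte) error {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, unit) {
            return nil // already up to date: skip daemon-reload/restart
        }
        if err := os.WriteFile(path, unit, 0o644); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"systemctl", "daemon-reload"},
            {"systemctl", "enable", "docker"},
            {"systemctl", "restart", "docker"},
        } {
            if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
                return fmt.Errorf("%v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // elided; the full unit is printed above
        if err := installUnit("/lib/systemd/system/docker.service", unit); err != nil {
            log.Fatal(err) // needs root, like the sudo calls in the log
        }
    }
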
	I0815 16:30:08.163573    3848 start.go:293] postStartSetup for "ha-138000" (driver="hyperkit")
	I0815 16:30:08.163581    3848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 16:30:08.163591    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:30:08.163828    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 16:30:08.163844    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:08.163938    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:08.164036    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:08.164139    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:08.164243    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:30:08.204020    3848 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 16:30:08.207179    3848 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 16:30:08.207192    3848 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/addons for local assets ...
	I0815 16:30:08.207302    3848 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/files for local assets ...
	I0815 16:30:08.207487    3848 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> 14982.pem in /etc/ssl/certs
	I0815 16:30:08.207494    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /etc/ssl/certs/14982.pem
	I0815 16:30:08.207699    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 16:30:08.215716    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:30:08.234526    3848 start.go:296] duration metric: took 70.944461ms for postStartSetup
	I0815 16:30:08.234554    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:30:08.234725    3848 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0815 16:30:08.234737    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:08.234828    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:08.234919    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:08.235004    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:08.235082    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:30:08.273169    3848 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0815 16:30:08.273225    3848 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0815 16:30:08.324608    3848 fix.go:56] duration metric: took 13.880521363s for fixHost
	I0815 16:30:08.324634    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:08.324763    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:08.324864    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:08.324958    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:08.325046    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:08.325174    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:08.325312    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:30:08.325319    3848 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 16:30:08.390142    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723764608.424079213
	
	I0815 16:30:08.390153    3848 fix.go:216] guest clock: 1723764608.424079213
	I0815 16:30:08.390158    3848 fix.go:229] Guest: 2024-08-15 16:30:08.424079213 -0700 PDT Remote: 2024-08-15 16:30:08.324621 -0700 PDT m=+14.326357489 (delta=99.458213ms)
	I0815 16:30:08.390181    3848 fix.go:200] guest clock delta is within tolerance: 99.458213ms
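
fix.go compares the guest's `date +%s.%N` output against the host clock and accepts the ~99ms drift as within tolerance. The arithmetic is a parse and a subtraction; a sketch (the one-second threshold below is an assumption, the log does not state the actual tolerance):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        guest := "1723764608.424079213" // `date +%s.%N` output captured in the log above
        parts := strings.SplitN(guest, ".", 2)
        sec, _ := strconv.ParseInt(parts[0], 10, 64)
        nsec, _ := strconv.ParseInt(parts[1], 10, 64)
        guestTime := time.Unix(sec, nsec)

        delta := time.Since(guestTime)
        if delta < 0 {
            delta = -delta
        }
        fmt.Printf("delta=%v within=%v\n", delta, delta < time.Second)
    }
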
	I0815 16:30:08.390185    3848 start.go:83] releasing machines lock for "ha-138000", held for 13.946148575s
	I0815 16:30:08.390205    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:30:08.390341    3848 main.go:141] libmachine: (ha-138000) Calling .GetIP
	I0815 16:30:08.390446    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:30:08.390809    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:30:08.390921    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:30:08.390989    3848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 16:30:08.391019    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:08.391075    3848 ssh_runner.go:195] Run: cat /version.json
	I0815 16:30:08.391087    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:08.391112    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:08.391203    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:08.391220    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:08.391315    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:08.391333    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:08.391411    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:08.391426    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:30:08.391513    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:30:08.423504    3848 ssh_runner.go:195] Run: systemctl --version
	I0815 16:30:08.428371    3848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 16:30:08.479207    3848 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 16:30:08.479307    3848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 16:30:08.492318    3848 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 16:30:08.492331    3848 start.go:495] detecting cgroup driver to use...
	I0815 16:30:08.492428    3848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:30:08.510522    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0815 16:30:08.519382    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0815 16:30:08.528348    3848 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0815 16:30:08.528399    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0815 16:30:08.537505    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:30:08.546478    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0815 16:30:08.555462    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:30:08.564389    3848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 16:30:08.573622    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0815 16:30:08.582698    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0815 16:30:08.591735    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0815 16:30:08.600760    3848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 16:30:08.609049    3848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 16:30:08.617235    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:08.722765    3848 ssh_runner.go:195] Run: sudo systemctl restart containerd
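
The run of sed one-liners above rewrites /etc/containerd/config.toml for the cgroupfs driver before containerd is restarted. Each of them translates directly to a Go regexp; the SystemdCgroup edit, for example:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        toml := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n"
        // Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        fmt.Print(re.ReplaceAllString(toml, "${1}SystemdCgroup = false"))
    }
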
	I0815 16:30:08.746033    3848 start.go:495] detecting cgroup driver to use...
	I0815 16:30:08.746116    3848 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0815 16:30:08.759830    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:30:08.771599    3848 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 16:30:08.789529    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:30:08.802787    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:30:08.815377    3848 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0815 16:30:08.844257    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:30:08.860249    3848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:30:08.875283    3848 ssh_runner.go:195] Run: which cri-dockerd
	I0815 16:30:08.878327    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0815 16:30:08.886411    3848 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0815 16:30:08.899899    3848 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0815 16:30:09.005084    3848 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0815 16:30:09.128876    3848 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0815 16:30:09.128948    3848 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0815 16:30:09.143602    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:09.247986    3848 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0815 16:30:11.515907    3848 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.267909782s)
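
docker.go:574 above also pushes a small /etc/docker/daemon.json (130 bytes in this run) so that Docker itself uses cgroupfs. The payload is not echoed in the log, so the snippet below is an assumption built on dockerd's standard `exec-opts` setting rather than minikube's exact file:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // "exec-opts" is the standard dockerd daemon.json knob for the cgroup
        // driver; the rest of minikube's actual payload is not visible here.
        cfg := map[string]any{
            "exec-opts": []string{"native.cgroupdriver=cgroupfs"},
        }
        out, _ := json.MarshalIndent(cfg, "", "  ")
        fmt.Println(string(out)) // would be written to /etc/docker/daemon.json over SSH
    }
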
	I0815 16:30:11.515971    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0815 16:30:11.526125    3848 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0815 16:30:11.539600    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:30:11.550726    3848 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0815 16:30:11.659005    3848 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0815 16:30:11.764312    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:11.871322    3848 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0815 16:30:11.884643    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:30:11.896838    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:12.002912    3848 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0815 16:30:12.062997    3848 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0815 16:30:12.063089    3848 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0815 16:30:12.067549    3848 start.go:563] Will wait 60s for crictl version
	I0815 16:30:12.067596    3848 ssh_runner.go:195] Run: which crictl
	I0815 16:30:12.070446    3848 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 16:30:12.096434    3848 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0815 16:30:12.096513    3848 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:30:12.116037    3848 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:30:12.178340    3848 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0815 16:30:12.178421    3848 main.go:141] libmachine: (ha-138000) Calling .GetIP
	I0815 16:30:12.178824    3848 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0815 16:30:12.183375    3848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 16:30:12.193025    3848 kubeadm.go:883] updating cluster {Name:ha-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false f
reshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 16:30:12.193108    3848 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:30:12.193158    3848 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0815 16:30:12.206441    3848 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0815 16:30:12.206452    3848 docker.go:615] Images already preloaded, skipping extraction
	I0815 16:30:12.206519    3848 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0815 16:30:12.219546    3848 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0815 16:30:12.219565    3848 cache_images.go:84] Images are preloaded, skipping loading
	I0815 16:30:12.219576    3848 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0815 16:30:12.219652    3848 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-138000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 16:30:12.219721    3848 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0815 16:30:12.258519    3848 cni.go:84] Creating CNI manager for ""
	I0815 16:30:12.258529    3848 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0815 16:30:12.258542    3848 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 16:30:12.258557    3848 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-138000 NodeName:ha-138000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 16:30:12.258636    3848 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-138000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 16:30:12.258649    3848 kube-vip.go:115] generating kube-vip config ...
	I0815 16:30:12.258696    3848 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 16:30:12.271337    3848 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 16:30:12.271407    3848 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
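In the manifest above, whichever control-plane node holds the plndr-cp-lock lease announces 192.169.0.254 via ARP on eth0 and load-balances port 8443 (lb_enable/lb_port). A quick reachability probe for that VIP, with the address and port taken from the env block above:

// vip_probe.go: check that the kube-vip control-plane VIP from the manifest
// above is accepting TCP connections on the API server port.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "192.169.0.254:8443", 5*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("VIP answering on 192.169.0.254:8443")
}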
	I0815 16:30:12.271468    3848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 16:30:12.279197    3848 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 16:30:12.279243    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0815 16:30:12.286309    3848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0815 16:30:12.299687    3848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 16:30:12.313389    3848 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0815 16:30:12.327846    3848 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0815 16:30:12.341535    3848 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0815 16:30:12.344364    3848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 16:30:12.353627    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:12.452370    3848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:30:12.466830    3848 certs.go:68] Setting up /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000 for IP: 192.169.0.5
	I0815 16:30:12.466842    3848 certs.go:194] generating shared ca certs ...
	I0815 16:30:12.466852    3848 certs.go:226] acquiring lock for ca certs: {Name:mka1c88019f6064d2983ac988db71a67aeb65696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:30:12.467038    3848 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key
	I0815 16:30:12.467111    3848 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key
	I0815 16:30:12.467121    3848 certs.go:256] generating profile certs ...
	I0815 16:30:12.467229    3848 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.key
	I0815 16:30:12.467304    3848 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key.7af4c91a
	I0815 16:30:12.467369    3848 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key
	I0815 16:30:12.467377    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 16:30:12.467397    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 16:30:12.467414    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 16:30:12.467432    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 16:30:12.467450    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 16:30:12.467479    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 16:30:12.467508    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 16:30:12.467527    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 16:30:12.467627    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem (1338 bytes)
	W0815 16:30:12.467674    3848 certs.go:480] ignoring /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498_empty.pem, impossibly tiny 0 bytes
	I0815 16:30:12.467683    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 16:30:12.467721    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem (1082 bytes)
	I0815 16:30:12.467762    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem (1123 bytes)
	I0815 16:30:12.467793    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem (1679 bytes)
	I0815 16:30:12.467866    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:30:12.467898    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:30:12.467918    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem -> /usr/share/ca-certificates/1498.pem
	I0815 16:30:12.467935    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /usr/share/ca-certificates/14982.pem
	I0815 16:30:12.468350    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 16:30:12.503573    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 16:30:12.529609    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 16:30:12.555283    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 16:30:12.583638    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 16:30:12.612822    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 16:30:12.658082    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 16:30:12.709731    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 16:30:12.747480    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 16:30:12.797444    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem --> /usr/share/ca-certificates/1498.pem (1338 bytes)
	I0815 16:30:12.830947    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /usr/share/ca-certificates/14982.pem (1708 bytes)
	I0815 16:30:12.850811    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 16:30:12.864245    3848 ssh_runner.go:195] Run: openssl version
	I0815 16:30:12.868404    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 16:30:12.876802    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:30:12.880151    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:05 /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:30:12.880186    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:30:12.884283    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 16:30:12.892538    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1498.pem && ln -fs /usr/share/ca-certificates/1498.pem /etc/ssl/certs/1498.pem"
	I0815 16:30:12.900652    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1498.pem
	I0815 16:30:12.904017    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:14 /usr/share/ca-certificates/1498.pem
	I0815 16:30:12.904050    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1498.pem
	I0815 16:30:12.908285    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1498.pem /etc/ssl/certs/51391683.0"
	I0815 16:30:12.916567    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14982.pem && ln -fs /usr/share/ca-certificates/14982.pem /etc/ssl/certs/14982.pem"
	I0815 16:30:12.924847    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14982.pem
	I0815 16:30:12.928159    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:14 /usr/share/ca-certificates/14982.pem
	I0815 16:30:12.928193    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14982.pem
	I0815 16:30:12.932352    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14982.pem /etc/ssl/certs/3ec20f2e.0"
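The openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed-directory lookup: every CA in /etc/ssl/certs needs a <subject-hash>.0 symlink (b5213941.0, 51391683.0, 3ec20f2e.0 here) so verification can locate it by subject hash. A sketch of the same two steps, shelling out to the same openssl invocation the runner uses:

// hash_link.go: recreate the <subject-hash>.0 symlink step from the log,
// using `openssl x509 -hash -noout -in <pem>` to compute the hash.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	const pem = "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // ln -fs semantics: replace any stale link
	if err := os.Symlink(pem, link); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked", link, "->", pem)
}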
	I0815 16:30:12.940679    3848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 16:30:12.943953    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 16:30:12.948281    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 16:30:12.952498    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 16:30:12.956859    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 16:30:12.961066    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 16:30:12.965237    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
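Each -checkend 86400 run above asks openssl to fail if the certificate expires within the next 24 hours, which is how the restart path decides the existing control-plane certs are still usable. A pure-Go equivalent of one such check (the path is one of the files probed above):

// checkend.go: pure-Go equivalent of `openssl x509 -noout -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; regeneration needed")
	} else {
		fmt.Println("certificate valid beyond 24h")
	}
}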
	I0815 16:30:12.969424    3848 kubeadm.go:392] StartCluster: {Name:ha-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 C
lusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:30:12.969537    3848 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0815 16:30:12.983217    3848 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 16:30:12.990985    3848 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 16:30:12.990998    3848 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 16:30:12.991037    3848 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 16:30:12.998611    3848 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 16:30:12.998906    3848 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-138000" does not appear in /Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:30:12.998990    3848 kubeconfig.go:62] /Users/jenkins/minikube-integration/19452-977/kubeconfig needs updating (will repair): [kubeconfig missing "ha-138000" cluster setting kubeconfig missing "ha-138000" context setting]
	I0815 16:30:12.999150    3848 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-977/kubeconfig: {Name:mk0f04b0199f84dc20eb294d5b790451b41e43fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:30:12.999761    3848 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:30:12.999936    3848 kapi.go:59] client config for ha-138000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.key", CAFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, Use
rAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x3a6cf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0815 16:30:13.000222    3848 cert_rotation.go:140] Starting client certificate rotation controller
	I0815 16:30:13.000394    3848 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 16:30:13.007927    3848 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
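The restart decision hinges on the diff -u at 16:30:13.000394: when the kubeadm.yaml already on the node matches the freshly rendered kubeadm.yaml.new, reconfiguration is skipped, which is why restartPrimaryControlPlane completes in about 17ms below. A sketch of that decision; the real control flow in kubeadm.go is paraphrased, not copied:

// reconfig_check.go: skip kubeadm reconfiguration when the rendered config
// matches what is already on disk, mirroring the `sudo diff -u` step above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	// diff exits 0 when the files are identical, non-zero when they differ.
	if err := cmd.Run(); err == nil {
		fmt.Println("configs identical: cluster does not require reconfiguration")
		return
	}
	fmt.Println("configs differ: kubeadm re-init/upgrade path required")
}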
	I0815 16:30:13.007944    3848 kubeadm.go:597] duration metric: took 16.941718ms to restartPrimaryControlPlane
	I0815 16:30:13.007950    3848 kubeadm.go:394] duration metric: took 38.534887ms to StartCluster
	I0815 16:30:13.007960    3848 settings.go:142] acquiring lock: {Name:mk694dad19d37394fa6b13c51a7dc54b62e97c12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:30:13.008036    3848 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:30:13.008396    3848 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-977/kubeconfig: {Name:mk0f04b0199f84dc20eb294d5b790451b41e43fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:30:13.008625    3848 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 16:30:13.008644    3848 start.go:241] waiting for startup goroutines ...
	I0815 16:30:13.008652    3848 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 16:30:13.008752    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:30:13.052465    3848 out.go:177] * Enabled addons: 
	I0815 16:30:13.073695    3848 addons.go:510] duration metric: took 65.048594ms for enable addons: enabled=[]
	I0815 16:30:13.073733    3848 start.go:246] waiting for cluster config update ...
	I0815 16:30:13.073745    3848 start.go:255] writing updated cluster config ...
	I0815 16:30:13.095512    3848 out.go:201] 
	I0815 16:30:13.116951    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:30:13.117068    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:30:13.139649    3848 out.go:177] * Starting "ha-138000-m02" control-plane node in "ha-138000" cluster
	I0815 16:30:13.181551    3848 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:30:13.181610    3848 cache.go:56] Caching tarball of preloaded images
	I0815 16:30:13.181807    3848 preload.go:172] Found /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0815 16:30:13.181826    3848 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:30:13.181935    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:30:13.182895    3848 start.go:360] acquireMachinesLock for ha-138000-m02: {Name:mkac8034c8a60617271f2754a03dcaf74fc4fef7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:30:13.183018    3848 start.go:364] duration metric: took 98.069µs to acquireMachinesLock for "ha-138000-m02"
	I0815 16:30:13.183044    3848 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:30:13.183051    3848 fix.go:54] fixHost starting: m02
	I0815 16:30:13.183444    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:30:13.183470    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:30:13.192973    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52251
	I0815 16:30:13.193340    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:30:13.193664    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:30:13.193677    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:30:13.193949    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:30:13.194068    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:30:13.194158    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetState
	I0815 16:30:13.194250    3848 main.go:141] libmachine: (ha-138000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:30:13.194330    3848 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid from json: 3670
	I0815 16:30:13.195266    3848 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid 3670 missing from process table
	I0815 16:30:13.195300    3848 fix.go:112] recreateIfNeeded on ha-138000-m02: state=Stopped err=<nil>
	I0815 16:30:13.195308    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	W0815 16:30:13.195387    3848 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:30:13.216598    3848 out.go:177] * Restarting existing hyperkit VM for "ha-138000-m02" ...
	I0815 16:30:13.258591    3848 main.go:141] libmachine: (ha-138000-m02) Calling .Start
	I0815 16:30:13.258850    3848 main.go:141] libmachine: (ha-138000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:30:13.258951    3848 main.go:141] libmachine: (ha-138000-m02) minikube might have been shut down in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/hyperkit.pid
	I0815 16:30:13.260726    3848 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid 3670 missing from process table
	I0815 16:30:13.260746    3848 main.go:141] libmachine: (ha-138000-m02) DBG | pid 3670 is in state "Stopped"
	I0815 16:30:13.260762    3848 main.go:141] libmachine: (ha-138000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/hyperkit.pid...
	I0815 16:30:13.261090    3848 main.go:141] libmachine: (ha-138000-m02) DBG | Using UUID 4cff9b5a-9fe3-4215-9139-05f05b79bce3
	I0815 16:30:13.290755    3848 main.go:141] libmachine: (ha-138000-m02) DBG | Generated MAC 9a:c2:e9:d7:1c:58
	I0815 16:30:13.290775    3848 main.go:141] libmachine: (ha-138000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000
	I0815 16:30:13.290894    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"4cff9b5a-9fe3-4215-9139-05f05b79bce3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003beae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:30:13.290919    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"4cff9b5a-9fe3-4215-9139-05f05b79bce3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003beae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:30:13.290973    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "4cff9b5a-9fe3-4215-9139-05f05b79bce3", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/ha-138000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-13
8000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"}
	I0815 16:30:13.291003    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 4cff9b5a-9fe3-4215-9139-05f05b79bce3 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/ha-138000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 co
nsole=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"
	I0815 16:30:13.291039    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0815 16:30:13.292431    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 DEBUG: hyperkit: Pid is 4167
	I0815 16:30:13.292922    3848 main.go:141] libmachine: (ha-138000-m02) DBG | Attempt 0
	I0815 16:30:13.292931    3848 main.go:141] libmachine: (ha-138000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:30:13.292988    3848 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid from json: 4167
	I0815 16:30:13.294816    3848 main.go:141] libmachine: (ha-138000-m02) DBG | Searching for 9a:c2:e9:d7:1c:58 in /var/db/dhcpd_leases ...
	I0815 16:30:13.294866    3848 main.go:141] libmachine: (ha-138000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0815 16:30:13.294889    3848 main.go:141] libmachine: (ha-138000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:30:13.294903    3848 main.go:141] libmachine: (ha-138000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfdfcb}
	I0815 16:30:13.294915    3848 main.go:141] libmachine: (ha-138000-m02) DBG | Found match: 9a:c2:e9:d7:1c:58
	I0815 16:30:13.294931    3848 main.go:141] libmachine: (ha-138000-m02) DBG | IP: 192.169.0.6
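hyperkit guests lease their addresses from macOS's bootpd, so the driver maps the generated MAC to an IP by scanning /var/db/dhcpd_leases, as the search above shows. A simplified scanner; the raw file's name=/ip_address=/hw_address= layout is an assumption about bootpd's format, since the log only prints the parsed entries:

// lease_lookup.go: find the IP bootpd leased to a given MAC by scanning
// /var/db/dhcpd_leases. Assumes one key=value pair per line inside {} blocks.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	const wantMAC = "9a:c2:e9:d7:1c:58"
	f, err := os.Open("/var/db/dhcpd_leases")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address=1,9a:c2:e9:d7:1c:58 -- strip the "1," type prefix.
			mac := line[strings.Index(line, ",")+1:]
			if mac == wantMAC {
				fmt.Println("Found match:", wantMAC, "->", ip)
				return
			}
		}
	}
	log.Fatal("no lease found for ", wantMAC)
}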
	I0815 16:30:13.294997    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetConfigRaw
	I0815 16:30:13.295728    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetIP
	I0815 16:30:13.295920    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:30:13.296384    3848 machine.go:93] provisionDockerMachine start ...
	I0815 16:30:13.296394    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:30:13.296516    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:13.296606    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:13.296695    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:13.296801    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:13.296905    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:13.297071    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:13.297242    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:30:13.297249    3848 main.go:141] libmachine: About to run SSH command:
	hostname
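"Using SSH client type: native" means libmachine executes these commands through Go's golang.org/x/crypto/ssh package rather than the system ssh binary. A minimal equivalent of the hostname round-trip above, with the key path, user, and address taken from this log:

// ssh_hostname.go: run `hostname` on the guest the way the native SSH
// client above does, via golang.org/x/crypto/ssh.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM; no known_hosts
	}
	client, err := ssh.Dial("tcp", "192.169.0.6:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	out, err := session.Output("hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("SSH cmd output: %s", out)
}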
	I0815 16:30:13.300476    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0815 16:30:13.310276    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0815 16:30:13.311421    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:30:13.311448    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:30:13.311463    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:30:13.311475    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:30:13.698130    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0815 16:30:13.698145    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0815 16:30:13.812764    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:30:13.812785    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:30:13.812794    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:30:13.812888    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:30:13.813620    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0815 16:30:13.813637    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0815 16:30:19.405369    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:19 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0815 16:30:19.405428    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:19 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0815 16:30:19.405441    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:19 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0815 16:30:19.429063    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:19 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0815 16:30:24.364782    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 16:30:24.364794    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetMachineName
	I0815 16:30:24.364947    3848 buildroot.go:166] provisioning hostname "ha-138000-m02"
	I0815 16:30:24.364958    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetMachineName
	I0815 16:30:24.365057    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:24.365147    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:24.365238    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.365323    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.365453    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:24.365589    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:24.365741    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:30:24.365749    3848 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-138000-m02 && echo "ha-138000-m02" | sudo tee /etc/hostname
	I0815 16:30:24.435748    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-138000-m02
	
	I0815 16:30:24.435762    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:24.435893    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:24.435990    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.436082    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.436186    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:24.436313    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:24.436463    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:30:24.436475    3848 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-138000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-138000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-138000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 16:30:24.504475    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 16:30:24.504492    3848 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19452-977/.minikube CaCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19452-977/.minikube}
	I0815 16:30:24.504503    3848 buildroot.go:174] setting up certificates
	I0815 16:30:24.504519    3848 provision.go:84] configureAuth start
	I0815 16:30:24.504526    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetMachineName
	I0815 16:30:24.504663    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetIP
	I0815 16:30:24.504758    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:24.504846    3848 provision.go:143] copyHostCerts
	I0815 16:30:24.504877    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:30:24.504929    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem, removing ...
	I0815 16:30:24.504935    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:30:24.505124    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem (1082 bytes)
	I0815 16:30:24.505339    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:30:24.505371    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem, removing ...
	I0815 16:30:24.505375    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:30:24.505446    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem (1123 bytes)
	I0815 16:30:24.505596    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:30:24.505624    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem, removing ...
	I0815 16:30:24.505628    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:30:24.505696    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem (1679 bytes)
	I0815 16:30:24.505845    3848 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem org=jenkins.ha-138000-m02 san=[127.0.0.1 192.169.0.6 ha-138000-m02 localhost minikube]
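configureAuth signs a fresh server certificate against the shared CA with the SAN set shown above (127.0.0.1, 192.169.0.6, ha-138000-m02, localhost, minikube). A compressed crypto/x509 sketch of that signing step; the key size, validity window, and file paths are illustrative assumptions, as libmachine's actual cert helper is outside this log:

// server_cert.go: sign a server cert carrying the SANs from the provision
// step above, using the machine CA. Validity and usages are assumed defaults.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func mustPEM(path string) *pem.Block {
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatalf("no PEM data in %s", path)
	}
	return block
}

func main() {
	// Example paths; the run above uses the .minikube/certs directory.
	caCert, err := x509.ParseCertificate(mustPEM("certs/ca.pem").Bytes)
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("certs/ca-key.pem").Bytes)
	if err != nil {
		log.Fatal(err)
	}
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-138000-m02"}},
		DNSNames:     []string{"ha-138000-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}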
	I0815 16:30:24.669808    3848 provision.go:177] copyRemoteCerts
	I0815 16:30:24.669859    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 16:30:24.669875    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:24.670016    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:24.670138    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.670247    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:24.670341    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	I0815 16:30:24.707125    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 16:30:24.707202    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 16:30:24.726013    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 16:30:24.726070    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 16:30:24.745370    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 16:30:24.745429    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 16:30:24.765407    3848 provision.go:87] duration metric: took 260.879651ms to configureAuth
	I0815 16:30:24.765419    3848 buildroot.go:189] setting minikube options for container-runtime
	I0815 16:30:24.765586    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:30:24.765614    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:30:24.765750    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:24.765841    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:24.765917    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.765992    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.766073    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:24.766180    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:24.766348    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:30:24.766356    3848 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0815 16:30:24.825444    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0815 16:30:24.825455    3848 buildroot.go:70] root file system type: tmpfs
	I0815 16:30:24.825535    3848 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0815 16:30:24.825546    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:24.825668    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:24.825761    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.825848    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.825931    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:24.826067    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:24.826205    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:30:24.826249    3848 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0815 16:30:24.894944    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0815 16:30:24.894961    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:24.895099    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:24.895204    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.895287    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.895382    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:24.895505    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:24.895640    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:30:24.895652    3848 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0815 16:30:26.552071    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0815 16:30:26.552086    3848 machine.go:96] duration metric: took 13.255738864s to provisionDockerMachine
	I0815 16:30:26.552093    3848 start.go:293] postStartSetup for "ha-138000-m02" (driver="hyperkit")
	I0815 16:30:26.552100    3848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 16:30:26.552110    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:30:26.552311    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 16:30:26.552326    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:26.552426    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:26.552517    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:26.552610    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:26.552712    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	I0815 16:30:26.593353    3848 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 16:30:26.598425    3848 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 16:30:26.598438    3848 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/addons for local assets ...
	I0815 16:30:26.598548    3848 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/files for local assets ...
	I0815 16:30:26.598699    3848 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> 14982.pem in /etc/ssl/certs
	I0815 16:30:26.598705    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /etc/ssl/certs/14982.pem
	I0815 16:30:26.598861    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 16:30:26.610066    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:30:26.645456    3848 start.go:296] duration metric: took 93.354607ms for postStartSetup
	I0815 16:30:26.645497    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:30:26.645674    3848 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0815 16:30:26.645688    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:26.645776    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:26.645850    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:26.645933    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:26.646015    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	I0815 16:30:26.683361    3848 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0815 16:30:26.683423    3848 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0815 16:30:26.737495    3848 fix.go:56] duration metric: took 13.554488062s for fixHost
	I0815 16:30:26.737525    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:26.737661    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:26.737749    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:26.737848    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:26.737943    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:26.738080    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:26.738216    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:30:26.738224    3848 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 16:30:26.796943    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723764627.049155775
	
	I0815 16:30:26.796953    3848 fix.go:216] guest clock: 1723764627.049155775
	I0815 16:30:26.796959    3848 fix.go:229] Guest: 2024-08-15 16:30:27.049155775 -0700 PDT Remote: 2024-08-15 16:30:26.737509 -0700 PDT m=+32.739307986 (delta=311.646775ms)
	I0815 16:30:26.796973    3848 fix.go:200] guest clock delta is within tolerance: 311.646775ms
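
The fix.go lines above compare the guest's date +%s.%N output with the host clock and accept the 311ms skew as within tolerance. A small sketch of that comparison; the one-second tolerance constant is a placeholder, only the seconds.nanoseconds parsing mirrors the log.

	// clockdelta_sketch.go: parse `date +%s.%N` output and check skew tolerance.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// guestTime parses output like "1723764627.049155775" (seconds.nanoseconds).
	func guestTime(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		if len(parts) != 2 {
			return time.Time{}, fmt.Errorf("unexpected date output %q", out)
		}
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		nsec, err := strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		g, err := guestTime("1723764627.049155775")
		if err != nil {
			panic(err)
		}
		delta := time.Since(g)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = time.Second // assumed; minikube's real threshold may differ
		fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
	}
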
	I0815 16:30:26.796977    3848 start.go:83] releasing machines lock for "ha-138000-m02", held for 13.613993837s
	I0815 16:30:26.796994    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:30:26.797121    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetIP
	I0815 16:30:26.821561    3848 out.go:177] * Found network options:
	I0815 16:30:26.841357    3848 out.go:177]   - NO_PROXY=192.169.0.5
	W0815 16:30:26.862556    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 16:30:26.862605    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:30:26.863433    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:30:26.863671    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:30:26.863815    3848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 16:30:26.863856    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	W0815 16:30:26.863902    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 16:30:26.863997    3848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 16:30:26.864019    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:26.864116    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:26.864226    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:26.864284    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:26.864479    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:26.864535    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:26.864691    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	I0815 16:30:26.864752    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:26.864886    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	W0815 16:30:26.897510    3848 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 16:30:26.897576    3848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 16:30:26.944949    3848 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 16:30:26.944964    3848 start.go:495] detecting cgroup driver to use...
	I0815 16:30:26.945031    3848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:30:26.959965    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0815 16:30:26.969052    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0815 16:30:26.977789    3848 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0815 16:30:26.977840    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0815 16:30:26.986870    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:30:26.995871    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0815 16:30:27.004811    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:30:27.013722    3848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 16:30:27.022692    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0815 16:30:27.031569    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0815 16:30:27.040462    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0815 16:30:27.049386    3848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 16:30:27.057419    3848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 16:30:27.065508    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:27.164154    3848 ssh_runner.go:195] Run: sudo systemctl restart containerd
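
The run of sed commands above rewrites /etc/containerd/config.toml (pause image, SystemdCgroup = false for the cgroupfs driver, runc v2 shim, CNI conf_dir) before daemon-reloading and restarting containerd. A sketch of driving that sequence from Go through a generic command runner; the runner type is assumed, the sed strings are copied from the log.

	// containerdcfg_sketch.go: apply the config.toml edits seen above through
	// any "run a shell command" abstraction (here: a func value).
	package main

	import (
		"fmt"
		"os/exec"
	)

	type runner func(cmd string) error

	func configureContainerd(run runner) error {
		cmds := []string{
			`sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml`,
			`sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
			`sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
			`sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml`,
			`sudo systemctl daemon-reload`,
			`sudo systemctl restart containerd`,
		}
		for _, c := range cmds {
			if err := run(c); err != nil {
				return fmt.Errorf("%q failed: %w", c, err)
			}
		}
		return nil
	}

	func main() {
		// Local shell runner for illustration; minikube uses ssh_runner instead.
		local := func(cmd string) error { return exec.Command("sh", "-c", cmd).Run() }
		if err := configureContainerd(local); err != nil {
			fmt.Println(err)
		}
	}
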
	I0815 16:30:27.181165    3848 start.go:495] detecting cgroup driver to use...
	I0815 16:30:27.181250    3848 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0815 16:30:27.192595    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:30:27.203037    3848 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 16:30:27.216573    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:30:27.228211    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:30:27.239268    3848 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0815 16:30:27.258656    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:30:27.269954    3848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:30:27.284667    3848 ssh_runner.go:195] Run: which cri-dockerd
	I0815 16:30:27.287552    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0815 16:30:27.295653    3848 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0815 16:30:27.309091    3848 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0815 16:30:27.403676    3848 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0815 16:30:27.500434    3848 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0815 16:30:27.500464    3848 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0815 16:30:27.514754    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:27.610670    3848 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0815 16:30:29.951174    3848 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.340492876s)
	I0815 16:30:29.951241    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0815 16:30:29.961656    3848 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0815 16:30:29.974207    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:30:29.984718    3848 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0815 16:30:30.078933    3848 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0815 16:30:30.191991    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:30.301187    3848 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0815 16:30:30.314601    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:30:30.325440    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:30.420867    3848 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0815 16:30:30.486340    3848 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0815 16:30:30.486435    3848 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0815 16:30:30.491068    3848 start.go:563] Will wait 60s for crictl version
	I0815 16:30:30.491127    3848 ssh_runner.go:195] Run: which crictl
	I0815 16:30:30.494150    3848 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 16:30:30.523583    3848 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
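
start.go waits up to 60s for /var/run/cri-dockerd.sock to appear and then up to 60s more for crictl to answer, as the two "Will wait 60s" lines show. A minimal polling sketch under those assumptions:

	// socketwait_sketch.go: poll for a unix socket path with a deadline,
	// the way the "Will wait 60s for socket path" step does.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
				return nil // socket exists
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("socket is up")
	}
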
	I0815 16:30:30.523658    3848 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:30:30.541608    3848 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:30:30.598613    3848 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0815 16:30:30.658061    3848 out.go:177]   - env NO_PROXY=192.169.0.5
	I0815 16:30:30.695353    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetIP
	I0815 16:30:30.695714    3848 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0815 16:30:30.700361    3848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
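
The /etc/hosts update above is an idempotent one-liner: drop any stale host.minikube.internal mapping, append the current one, and copy the temp file back with sudo. A sketch that rebuilds the same command string from Go; the helper name is hypothetical.

	// hostsentry_sketch.go: build the idempotent /etc/hosts update command
	// seen in the log (grep -v old entry, append new one, sudo cp back).
	package main

	import "fmt"

	// hostsUpdateCmd mirrors the logged bash one-liner; the echo embeds a
	// real tab between IP and hostname, the grep pattern a literal \t.
	func hostsUpdateCmd(ip, host string) string {
		return fmt.Sprintf("{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"", host, ip, host)
	}

	func main() {
		fmt.Println(hostsUpdateCmd("192.169.0.1", "host.minikube.internal"))
	}
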
	I0815 16:30:30.709893    3848 mustload.go:65] Loading cluster: ha-138000
	I0815 16:30:30.710062    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:30:30.710316    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:30:30.710336    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:30:30.719005    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52273
	I0815 16:30:30.719360    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:30:30.719741    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:30:30.719750    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:30:30.719981    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:30:30.720103    3848 main.go:141] libmachine: (ha-138000) Calling .GetState
	I0815 16:30:30.720187    3848 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:30:30.720267    3848 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid from json: 3862
	I0815 16:30:30.721211    3848 host.go:66] Checking if "ha-138000" exists ...
	I0815 16:30:30.721471    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:30:30.721491    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:30:30.729999    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52275
	I0815 16:30:30.730336    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:30:30.730678    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:30:30.730693    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:30:30.730926    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:30:30.731056    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:30:30.731175    3848 certs.go:68] Setting up /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000 for IP: 192.169.0.6
	I0815 16:30:30.731181    3848 certs.go:194] generating shared ca certs ...
	I0815 16:30:30.731197    3848 certs.go:226] acquiring lock for ca certs: {Name:mka1c88019f6064d2983ac988db71a67aeb65696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:30:30.731336    3848 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key
	I0815 16:30:30.731387    3848 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key
	I0815 16:30:30.731396    3848 certs.go:256] generating profile certs ...
	I0815 16:30:30.731509    3848 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.key
	I0815 16:30:30.731595    3848 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key.5f0053a1
	I0815 16:30:30.731651    3848 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key
	I0815 16:30:30.731658    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 16:30:30.731679    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 16:30:30.731700    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 16:30:30.731722    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 16:30:30.731740    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 16:30:30.731768    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 16:30:30.731791    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 16:30:30.731809    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 16:30:30.731883    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem (1338 bytes)
	W0815 16:30:30.731920    3848 certs.go:480] ignoring /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498_empty.pem, impossibly tiny 0 bytes
	I0815 16:30:30.731928    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 16:30:30.731973    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem (1082 bytes)
	I0815 16:30:30.732017    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem (1123 bytes)
	I0815 16:30:30.732045    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem (1679 bytes)
	I0815 16:30:30.732121    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:30:30.732157    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /usr/share/ca-certificates/14982.pem
	I0815 16:30:30.732177    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:30:30.732194    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem -> /usr/share/ca-certificates/1498.pem
	I0815 16:30:30.732219    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:30.732316    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:30.732406    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:30.732529    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:30.732609    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:30:30.763783    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0815 16:30:30.767449    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0815 16:30:30.776129    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0815 16:30:30.779163    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0815 16:30:30.787730    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0815 16:30:30.791082    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0815 16:30:30.799754    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0815 16:30:30.802809    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0815 16:30:30.811618    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0815 16:30:30.814650    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0815 16:30:30.822963    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0815 16:30:30.826004    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0815 16:30:30.834906    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 16:30:30.854912    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 16:30:30.874577    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 16:30:30.894388    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 16:30:30.914413    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 16:30:30.933887    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 16:30:30.953772    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 16:30:30.973419    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 16:30:30.992862    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /usr/share/ca-certificates/14982.pem (1708 bytes)
	I0815 16:30:31.012391    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 16:30:31.031916    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem --> /usr/share/ca-certificates/1498.pem (1338 bytes)
	I0815 16:30:31.051694    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0815 16:30:31.065167    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0815 16:30:31.078573    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0815 16:30:31.091997    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0815 16:30:31.105622    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0815 16:30:31.119143    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0815 16:30:31.132670    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0815 16:30:31.146406    3848 ssh_runner.go:195] Run: openssl version
	I0815 16:30:31.150444    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 16:30:31.158651    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:30:31.162017    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:05 /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:30:31.162055    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:30:31.166191    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 16:30:31.174561    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1498.pem && ln -fs /usr/share/ca-certificates/1498.pem /etc/ssl/certs/1498.pem"
	I0815 16:30:31.182745    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1498.pem
	I0815 16:30:31.186223    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:14 /usr/share/ca-certificates/1498.pem
	I0815 16:30:31.186262    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1498.pem
	I0815 16:30:31.190437    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1498.pem /etc/ssl/certs/51391683.0"
	I0815 16:30:31.198642    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14982.pem && ln -fs /usr/share/ca-certificates/14982.pem /etc/ssl/certs/14982.pem"
	I0815 16:30:31.207129    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14982.pem
	I0815 16:30:31.210527    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:14 /usr/share/ca-certificates/14982.pem
	I0815 16:30:31.210565    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14982.pem
	I0815 16:30:31.214780    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14982.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 16:30:31.223055    3848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 16:30:31.226404    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 16:30:31.230624    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 16:30:31.234964    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 16:30:31.239281    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 16:30:31.243508    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 16:30:31.247740    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0815 16:30:31.251885    3848 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.0 docker true true} ...
	I0815 16:30:31.251948    3848 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-138000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
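
kubeadm.go:946 templates the kubelet drop-in shown above, pinning the versioned binary path, the per-node hostname override, and the node IP. A tiny sketch assembling that ExecStart line from node parameters; the function name is illustrative.

	// kubeletflags_sketch.go: assemble the kubelet ExecStart seen above
	// from per-node parameters (version, hostname override, node IP).
	package main

	import (
		"fmt"
		"strings"
	)

	func kubeletExecStart(version, hostname, nodeIP string) string {
		flags := []string{
			"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
			"--config=/var/lib/kubelet/config.yaml",
			"--hostname-override=" + hostname,
			"--kubeconfig=/etc/kubernetes/kubelet.conf",
			"--node-ip=" + nodeIP,
		}
		return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet %s", version, strings.Join(flags, " "))
	}

	func main() {
		fmt.Println(kubeletExecStart("v1.31.0", "ha-138000-m02", "192.169.0.6"))
	}
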
	I0815 16:30:31.251968    3848 kube-vip.go:115] generating kube-vip config ...
	I0815 16:30:31.251997    3848 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 16:30:31.264157    3848 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 16:30:31.264200    3848 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
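
kube-vip.go writes the static pod manifest above to /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes, per the scp line below). One way to sanity-check generated YAML like this is to round-trip it through a decoder; the sketch assumes gopkg.in/yaml.v3 as a dependency and only asserts the env entries that matter for control-plane load-balancing.

	// kubevipcheck_sketch.go: decode a generated kube-vip manifest and verify
	// the env entries (cp_enable, lb_enable, address) that the log enables.
	package main

	import (
		"fmt"
		"os"

		"gopkg.in/yaml.v3"
	)

	type pod struct {
		Spec struct {
			Containers []struct {
				Env []struct {
					Name  string `yaml:"name"`
					Value string `yaml:"value"`
				} `yaml:"env"`
			} `yaml:"containers"`
		} `yaml:"spec"`
	}

	func main() {
		raw, err := os.ReadFile("kube-vip.yaml")
		if err != nil {
			panic(err)
		}
		var p pod
		if err := yaml.Unmarshal(raw, &p); err != nil {
			panic(err)
		}
		want := map[string]bool{"cp_enable": false, "lb_enable": false, "address": false}
		for _, c := range p.Spec.Containers {
			for _, e := range c.Env {
				if _, ok := want[e.Name]; ok {
					want[e.Name] = true
				}
			}
		}
		fmt.Println("required env present:", want)
	}
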
	I0815 16:30:31.264247    3848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 16:30:31.272799    3848 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 16:30:31.272844    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0815 16:30:31.280999    3848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0815 16:30:31.294195    3848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 16:30:31.307421    3848 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0815 16:30:31.321201    3848 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0815 16:30:31.324137    3848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 16:30:31.334188    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:31.429450    3848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:30:31.443961    3848 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 16:30:31.444161    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:30:31.465375    3848 out.go:177] * Verifying Kubernetes components...
	I0815 16:30:31.507025    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:31.625968    3848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:30:31.645410    3848 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:30:31.645610    3848 kapi.go:59] client config for ha-138000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.key", CAFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x3a6cf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0815 16:30:31.645648    3848 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0815 16:30:31.645835    3848 node_ready.go:35] waiting up to 6m0s for node "ha-138000-m02" to be "Ready" ...
	I0815 16:30:31.645920    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:31.645925    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:31.645933    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:31.645936    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.053028    3848 round_trippers.go:574] Response Status: 200 OK in 8407 milliseconds
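
The round_trippers.go lines are client-go's debug transport, which prints the verb, URL, request headers, and response status with elapsed time (8.4s for this first GET). The same shape of logging can be reproduced with a custom http.RoundTripper; a sketch:

	// logrt_sketch.go: an http.RoundTripper that logs verb, URL, and latency,
	// approximating client-go's round_trippers.go debug output.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	type loggingRT struct{ next http.RoundTripper }

	func (l loggingRT) RoundTrip(req *http.Request) (*http.Response, error) {
		start := time.Now()
		fmt.Printf("%s %s\n", req.Method, req.URL)
		for k, v := range req.Header {
			fmt.Printf("    %s: %v\n", k, v)
		}
		resp, err := l.next.RoundTrip(req)
		if err == nil {
			fmt.Printf("Response Status: %s in %d milliseconds\n",
				resp.Status, time.Since(start).Milliseconds())
		}
		return resp, err
	}

	func main() {
		client := &http.Client{Transport: loggingRT{http.DefaultTransport}}
		resp, err := client.Get("https://example.com/")
		if err != nil {
			fmt.Println(err)
			return
		}
		resp.Body.Close()
	}
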
	I0815 16:30:40.053934    3848 node_ready.go:49] node "ha-138000-m02" has status "Ready":"True"
	I0815 16:30:40.053949    3848 node_ready.go:38] duration metric: took 8.408123647s for node "ha-138000-m02" to be "Ready" ...
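
node_ready.go's wait amounts to polling GET /api/v1/nodes/<name> until the NodeReady condition is True. With client-go that is a few lines; the sketch below assumes k8s.io/client-go and k8s.io/apimachinery as dependencies and an illustrative kubeconfig path.

	// nodeready_sketch.go: poll a node's Ready condition, as the
	// "waiting up to 6m0s for node ... to be Ready" step does.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				n, err := cs.CoreV1().Nodes().Get(ctx, "ha-138000-m02", metav1.GetOptions{})
				if err != nil {
					return false, nil // keep polling through transient errors
				}
				for _, c := range n.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		fmt.Println("node ready wait finished, err:", err)
	}
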
	I0815 16:30:40.053959    3848 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 16:30:40.053997    3848 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0815 16:30:40.054008    3848 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0815 16:30:40.054051    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:30:40.054057    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.054064    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.054066    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.076049    3848 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0815 16:30:40.083485    3848 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dmgt5" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.083552    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-dmgt5
	I0815 16:30:40.083559    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.083565    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.083569    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.090478    3848 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 16:30:40.091010    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:40.091019    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.091025    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.091028    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.094713    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:40.095017    3848 pod_ready.go:93] pod "coredns-6f6b679f8f-dmgt5" in "kube-system" namespace has status "Ready":"True"
	I0815 16:30:40.095031    3848 pod_ready.go:82] duration metric: took 11.52447ms for pod "coredns-6f6b679f8f-dmgt5" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.095040    3848 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-zc8jj" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.095087    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-zc8jj
	I0815 16:30:40.095094    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.095102    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.095107    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.101746    3848 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 16:30:40.102483    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:40.102492    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.102500    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.102503    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.105983    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:40.106569    3848 pod_ready.go:93] pod "coredns-6f6b679f8f-zc8jj" in "kube-system" namespace has status "Ready":"True"
	I0815 16:30:40.106587    3848 pod_ready.go:82] duration metric: took 11.533246ms for pod "coredns-6f6b679f8f-zc8jj" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.106595    3848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.106638    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000
	I0815 16:30:40.106644    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.106651    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.106654    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.110887    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:30:40.111881    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:40.111893    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.111902    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.111907    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.114794    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:40.115181    3848 pod_ready.go:93] pod "etcd-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:30:40.115194    3848 pod_ready.go:82] duration metric: took 8.594007ms for pod "etcd-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.115201    3848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.115242    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m02
	I0815 16:30:40.115247    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.115252    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.115256    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.121257    3848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 16:30:40.121684    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:40.121694    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.121704    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.121710    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.125990    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:30:40.126507    3848 pod_ready.go:93] pod "etcd-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:30:40.126520    3848 pod_ready.go:82] duration metric: took 11.312949ms for pod "etcd-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.126528    3848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.126573    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:30:40.126579    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.126585    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.126589    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.129916    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:40.254208    3848 request.go:632] Waited for 123.846339ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:30:40.254247    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:30:40.254252    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.254262    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.254299    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.258157    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:40.258510    3848 pod_ready.go:93] pod "etcd-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:30:40.258520    3848 pod_ready.go:82] duration metric: took 131.98589ms for pod "etcd-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
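
The request.go:632 "Waited ... due to client-side throttling" messages come from client-go's client-side rate limiter: with rest.Config QPS and Burst left at zero, the defaults of 5 QPS with a burst of 10 apply, so the back-to-back pod and node GETs queue briefly. If that latency mattered, the limits can be raised when building the config; a sketch with arbitrary values:

	// qps_sketch.go: raise client-go's client-side rate limits so status
	// polls are not queued by the default 5 QPS / burst 10 limiter.
	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
		if err != nil {
			panic(err)
		}
		cfg.QPS = 50    // arbitrary illustrative values
		cfg.Burst = 100 // defaults are 5/10 when left at zero
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Printf("clientset ready with QPS=%v Burst=%v: %T\n", cfg.QPS, cfg.Burst, cs)
	}
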
	I0815 16:30:40.258532    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.454350    3848 request.go:632] Waited for 195.778452ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000
	I0815 16:30:40.454424    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000
	I0815 16:30:40.454430    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.454436    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.454441    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.457270    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:40.654210    3848 request.go:632] Waited for 196.49648ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:40.654247    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:40.654254    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.654300    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.654306    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.662420    3848 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0815 16:30:40.662780    3848 pod_ready.go:98] node "ha-138000" hosting pod "kube-apiserver-ha-138000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-138000" has status "Ready":"False"
	I0815 16:30:40.662798    3848 pod_ready.go:82] duration metric: took 404.260054ms for pod "kube-apiserver-ha-138000" in "kube-system" namespace to be "Ready" ...
	E0815 16:30:40.662809    3848 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-138000" hosting pod "kube-apiserver-ha-138000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-138000" has status "Ready":"False"
	I0815 16:30:40.662819    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.854147    3848 request.go:632] Waited for 191.277341ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:40.854226    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:40.854232    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.854238    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.854243    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.859631    3848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 16:30:41.054463    3848 request.go:632] Waited for 194.266573ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:41.054497    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:41.054501    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:41.054509    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:41.054513    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:41.058210    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:41.254872    3848 request.go:632] Waited for 91.867207ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:41.254917    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:41.254966    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:41.254978    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:41.254982    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:41.258343    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:41.455877    3848 request.go:632] Waited for 196.977249ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:41.455912    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:41.455919    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:41.455925    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:41.455931    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:41.457855    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:41.664056    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:41.664082    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:41.664093    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:41.664100    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:41.667876    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:41.854208    3848 request.go:632] Waited for 185.493412ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:41.854247    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:41.854253    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:41.854260    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:41.854264    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:41.856823    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:42.163578    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:42.163664    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:42.163680    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:42.163716    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:42.167135    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:42.254205    3848 request.go:632] Waited for 86.267935ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:42.254261    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:42.254269    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:42.254286    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:42.254324    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:42.257709    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:42.664326    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:42.664344    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:42.664353    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:42.664357    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:42.666960    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:42.667548    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:42.667555    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:42.667561    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:42.667564    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:42.669222    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:42.669539    3848 pod_ready.go:103] pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace has status "Ready":"False"
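	(Each `has status "Ready":"False"` verdict reflects the pod's PodReady condition as reported by the API server. A minimal, self-contained sketch of that check — illustrative code, not minikube's pod_ready.go:)
	
	    package main
	
	    import (
	    	"fmt"
	
	    	corev1 "k8s.io/api/core/v1"
	    )
	
	    // isPodReady reports whether the pod's PodReady condition is True,
	    // the test behind the log's "Ready":"False" verdicts.
	    func isPodReady(pod *corev1.Pod) bool {
	    	for _, cond := range pod.Status.Conditions {
	    		if cond.Type == corev1.PodReady {
	    			return cond.Status == corev1.ConditionTrue
	    		}
	    	}
	    	return false
	    }
	
	    func main() {
	    	pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
	    		{Type: corev1.PodReady, Status: corev1.ConditionFalse},
	    	}}}
	    	fmt.Println(isPodReady(pod)) // false until the kubelet flips the condition
	    }
	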
	I0815 16:30:43.163236    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:43.163273    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:43.163281    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:43.163286    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:43.165588    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:43.166081    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:43.166088    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:43.166094    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:43.166097    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:43.167727    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:43.663181    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:43.663266    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:43.663274    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:43.663277    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:43.665851    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:43.666288    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:43.666295    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:43.666301    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:43.666305    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:43.669495    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:44.163768    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:44.163782    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:44.163788    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:44.163800    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:44.166284    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:44.166820    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:44.166828    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:44.166834    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:44.166853    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:44.169173    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:44.663006    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:44.663018    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:44.663023    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:44.663025    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:44.665460    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:44.666145    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:44.666152    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:44.666158    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:44.666162    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:44.668246    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:45.164214    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:45.164237    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:45.164314    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:45.164325    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:45.167819    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:45.168514    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:45.168521    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:45.168528    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:45.168531    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:45.170434    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:45.170836    3848 pod_ready.go:103] pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace has status "Ready":"False"
	I0815 16:30:45.665030    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:45.665056    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:45.665068    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:45.665073    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:45.668540    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:45.669128    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:45.669139    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:45.669148    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:45.669152    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:45.671055    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:46.163033    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:46.163095    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:46.163108    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:46.163116    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:46.166371    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:46.166786    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:46.166793    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:46.166799    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:46.166803    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:46.168600    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:46.663767    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:46.663791    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:46.663803    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:46.663814    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:46.667030    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:46.667614    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:46.667625    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:46.667633    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:46.667637    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:46.669233    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:47.163455    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:47.163469    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:47.163475    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:47.163480    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:47.167195    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:47.167557    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:47.167565    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:47.167571    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:47.167576    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:47.170814    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:47.171266    3848 pod_ready.go:103] pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace has status "Ready":"False"
	I0815 16:30:47.663794    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:47.663820    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:47.663831    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:47.663839    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:47.667639    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:47.668283    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:47.668291    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:47.668297    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:47.668301    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:47.669950    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:48.164538    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:48.164559    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:48.164581    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:48.164603    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:48.168530    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:48.169233    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:48.169241    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:48.169248    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:48.169251    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:48.171274    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:48.663780    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:48.663804    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:48.663815    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:48.663821    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:48.667278    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:48.667837    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:48.667845    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:48.667851    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:48.667856    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:48.669518    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:49.165064    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:49.165087    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:49.165098    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:49.165104    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:49.168508    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:49.169206    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:49.169217    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:49.169225    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:49.169230    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:49.171198    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:49.171795    3848 pod_ready.go:103] pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace has status "Ready":"False"
	I0815 16:30:49.663424    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:49.663448    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:49.663459    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:49.663467    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:49.667225    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:49.667697    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:49.667705    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:49.667711    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:49.667714    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:49.669376    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:50.164125    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:50.164149    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:50.164161    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:50.164166    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:50.167285    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:50.167810    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:50.167817    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:50.167823    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:50.167827    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:50.171799    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:50.663500    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:50.663525    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:50.663537    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:50.663543    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:50.667177    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:50.667713    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:50.667720    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:50.667726    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:50.667730    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:50.669352    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:51.164194    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:51.164219    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:51.164237    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:51.164244    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:51.167593    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:51.168246    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:51.168257    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:51.168264    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:51.168270    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:51.170524    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:51.664614    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:51.664638    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:51.664657    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:51.664665    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:51.668046    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:51.668566    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:51.668577    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:51.668585    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:51.668607    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:51.671534    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:51.671914    3848 pod_ready.go:103] pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace has status "Ready":"False"
	I0815 16:30:52.164065    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:52.164089    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:52.164101    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:52.164110    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:52.167433    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:52.167935    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:52.167943    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:52.167948    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:52.167952    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:52.169540    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:52.169859    3848 pod_ready.go:93] pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:30:52.169869    3848 pod_ready.go:82] duration metric: took 11.507082407s for pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:52.169876    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:52.169910    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m03
	I0815 16:30:52.169915    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:52.169920    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:52.169923    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:52.171715    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:52.172141    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:30:52.172148    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:52.172154    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:52.172158    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:52.173532    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:52.173854    3848 pod_ready.go:93] pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:30:52.173863    3848 pod_ready.go:82] duration metric: took 3.981675ms for pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
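	(The "waiting up to 6m0s for pod ..." loops pair that condition check with timed polling: fetch the pod, test its Ready condition, sleep, repeat until it is Ready or the timeout expires, then record the duration metric. A sketch of the same shape using apimachinery's wait helpers and a fake clientset — the names and the 500ms interval are assumptions; minikube's actual loop differs:)
	
	    package main
	
	    import (
	    	"context"
	    	"fmt"
	    	"time"
	
	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/apimachinery/pkg/util/wait"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/kubernetes/fake"
	    )
	
	    // waitPodReady polls the pod until its PodReady condition is True or
	    // the 6-minute timeout expires, mirroring the wait loops in the log.
	    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
	    		func(ctx context.Context) (bool, error) {
	    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	    			if err != nil {
	    				return false, nil // treat errors as "not yet"; keep polling
	    			}
	    			for _, cond := range pod.Status.Conditions {
	    				if cond.Type == corev1.PodReady {
	    					return cond.Status == corev1.ConditionTrue, nil
	    				}
	    			}
	    			return false, nil
	    		})
	    }
	
	    func main() {
	    	// Fake clientset seeded with an already-Ready pod so the wait returns at once.
	    	cs := fake.NewSimpleClientset(&corev1.Pod{
	    		ObjectMeta: metav1.ObjectMeta{Name: "kube-apiserver-demo", Namespace: "kube-system"},
	    		Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
	    			{Type: corev1.PodReady, Status: corev1.ConditionTrue},
	    		}},
	    	})
	    	fmt.Println(waitPodReady(context.Background(), cs, "kube-system", "kube-apiserver-demo"))
	    }
	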
	I0815 16:30:52.173872    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:52.173900    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:52.173905    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:52.173911    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:52.173915    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:52.175518    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:52.175919    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:52.175926    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:52.175932    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:52.175936    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:52.177444    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:52.675197    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:52.675270    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:52.675284    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:52.675316    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:52.678186    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:52.678703    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:52.678711    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:52.678716    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:52.678719    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:52.680216    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:53.174971    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:53.174985    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:53.174994    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:53.175001    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:53.177452    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:53.177896    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:53.177903    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:53.177909    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:53.177912    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:53.179480    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:53.674788    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:53.674799    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:53.674806    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:53.674809    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:53.676873    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:53.677297    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:53.677305    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:53.677311    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:53.677315    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:53.678908    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:54.175897    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:54.175920    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:54.175937    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:54.175942    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:54.180021    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:30:54.180479    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:54.180486    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:54.180492    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:54.180495    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:54.182351    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:54.182698    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:30:54.674099    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:54.674113    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:54.674122    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:54.674126    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:54.676508    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:54.676959    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:54.676967    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:54.676973    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:54.676977    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:54.678531    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:55.174102    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:55.174117    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:55.174124    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:55.174129    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:55.176616    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:55.176978    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:55.176985    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:55.176991    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:55.176995    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:55.178804    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:55.675041    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:55.675073    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:55.675080    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:55.675083    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:55.677155    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:55.677606    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:55.677614    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:55.677620    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:55.677623    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:55.679257    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:56.174332    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:56.174347    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:56.174355    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:56.174360    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:56.176768    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:56.177182    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:56.177189    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:56.177194    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:56.177199    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:56.178739    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:56.674623    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:56.674644    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:56.674656    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:56.674663    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:56.678017    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:56.678729    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:56.678740    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:56.678748    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:56.678753    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:56.680396    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:56.680664    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:30:57.174239    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:57.174259    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:57.174270    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:57.174276    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:57.176913    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:57.177317    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:57.177325    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:57.177330    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:57.177333    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:57.179089    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:57.674639    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:57.674650    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:57.674656    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:57.674660    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:57.676502    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:57.676984    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:57.676992    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:57.676997    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:57.677001    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:57.678477    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:58.174097    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:58.174117    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:58.174128    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:58.174136    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:58.177182    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:58.177563    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:58.177571    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:58.177575    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:58.177579    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:58.179304    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:58.675031    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:58.675045    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:58.675051    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:58.675055    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:58.680738    3848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 16:30:58.682155    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:58.682163    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:58.682168    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:58.682171    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:58.686617    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:30:58.686985    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:30:59.174980    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:59.175006    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:59.175018    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:59.175023    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:59.178731    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:59.179314    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:59.179322    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:59.179328    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:59.179332    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:59.181206    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:59.674657    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:59.674670    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:59.674676    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:59.674679    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:59.676675    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:59.677055    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:59.677062    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:59.677069    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:59.677074    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:59.679271    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:00.174152    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:00.174175    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:00.174187    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:00.174194    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:00.177768    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:00.178234    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:00.178241    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:00.178247    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:00.178251    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:00.179906    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:00.675229    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:00.675240    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:00.675246    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:00.675250    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:00.677503    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:00.677966    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:00.677974    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:00.677979    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:00.677983    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:00.681462    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:01.174237    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:01.174258    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:01.174271    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:01.174278    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:01.177221    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:01.177958    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:01.177967    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:01.177973    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:01.177987    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:01.179870    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:01.180167    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:01.674059    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:01.674071    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:01.674078    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:01.674082    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:01.678596    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:31:01.679166    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:01.679175    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:01.679183    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:01.679203    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:01.681866    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:02.174721    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:02.174744    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:02.174757    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:02.174765    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:02.177936    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:02.178578    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:02.178585    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:02.178590    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:02.178593    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:02.180199    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:02.674480    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:02.674492    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:02.674498    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:02.674501    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:02.676574    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:02.677121    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:02.677129    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:02.677135    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:02.677138    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:02.678870    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:03.174993    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:03.175017    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:03.175028    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:03.175034    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:03.178103    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:03.178765    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:03.178773    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:03.178780    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:03.178783    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:03.180384    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:03.180717    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:03.675885    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:03.675928    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:03.675935    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:03.675938    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:03.681610    3848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 16:31:03.682165    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:03.682172    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:03.682178    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:03.682187    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:03.685681    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:04.173973    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:04.173985    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:04.173993    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:04.173996    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:04.176170    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:04.176622    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:04.176629    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:04.176635    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:04.176638    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:04.178918    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:04.674029    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:04.674041    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:04.674047    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:04.674051    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:04.676085    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:04.676616    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:04.676624    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:04.676629    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:04.676633    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:04.678653    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:05.174670    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:05.174682    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:05.174692    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:05.174696    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:05.176894    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:05.177444    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:05.177452    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:05.177458    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:05.177462    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:05.179988    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:05.673967    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:05.673984    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:05.673991    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:05.674005    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:05.676133    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:05.676616    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:05.676623    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:05.676629    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:05.676632    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:05.678220    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:05.678588    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:06.174028    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:06.174040    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:06.174046    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:06.174049    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:06.176193    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:06.176556    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:06.176564    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:06.176570    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:06.176574    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:06.178240    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:06.674003    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:06.674018    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:06.674028    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:06.674032    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:06.676638    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:06.677110    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:06.677118    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:06.677124    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:06.677127    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:06.680025    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:07.175462    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:07.175477    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:07.175485    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:07.175489    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:07.178337    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:07.178886    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:07.178895    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:07.178900    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:07.178904    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:07.181117    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:07.674103    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:07.674115    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:07.674121    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:07.674125    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:07.676375    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:07.676766    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:07.676774    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:07.676780    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:07.676783    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:07.678622    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:07.678897    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:08.174128    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:08.174151    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:08.174166    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:08.174203    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:08.177482    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:08.177896    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:08.177904    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:08.177909    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:08.177914    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:08.179348    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:08.674105    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:08.674132    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:08.674180    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:08.674191    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:08.677562    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:08.677981    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:08.677989    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:08.677994    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:08.677997    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:08.679564    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:09.174687    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:09.174712    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:09.174723    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:09.174728    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:09.177711    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:09.178141    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:09.178149    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:09.178155    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:09.178160    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:09.179715    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:09.675793    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:09.675810    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:09.675860    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:09.675867    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:09.681370    3848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 16:31:09.681707    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:09.681714    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:09.681720    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:09.681724    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:09.683407    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:09.683668    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:10.174082    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:10.174096    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:10.174104    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:10.174111    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:10.176432    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:10.176901    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:10.176909    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:10.176916    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:10.176919    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:10.178547    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:10.674143    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:10.674158    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:10.674166    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:10.674171    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:10.676827    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:10.677366    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:10.677374    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:10.677379    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:10.677398    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:10.679369    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:11.174015    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:11.174031    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:11.174039    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:11.174043    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:11.176194    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:11.176646    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:11.176655    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:11.176661    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:11.176664    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:11.178182    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:11.674088    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:11.674100    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:11.674107    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:11.674111    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:11.676722    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:11.677179    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:11.677186    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:11.677192    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:11.677197    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:11.679318    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:12.173967    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:12.173978    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:12.173983    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:12.173986    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:12.176395    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:12.176784    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:12.176792    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:12.176797    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:12.176799    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:12.178613    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:12.178965    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:12.674752    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:12.674764    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:12.674771    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:12.674774    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:12.676796    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:12.677237    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:12.677244    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:12.677249    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:12.677254    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:12.678824    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:13.174235    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:13.174257    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:13.174269    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:13.174275    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:13.177507    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:13.177937    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:13.177945    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:13.177950    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:13.177958    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:13.179998    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:13.674842    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:13.674865    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:13.674920    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:13.674927    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:13.677347    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:13.677743    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:13.677750    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:13.677756    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:13.677760    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:13.679598    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:14.174511    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:14.174531    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:14.174543    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:14.174548    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:14.177242    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:14.177787    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:14.177794    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:14.177799    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:14.177804    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:14.179505    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:14.179846    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:14.674978    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:14.674991    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:14.675000    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:14.675005    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:14.677126    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:14.677577    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:14.677584    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:14.677589    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:14.677592    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:14.679150    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.174111    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:15.174190    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.174206    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.174214    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.178180    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:15.178702    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:15.178709    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.178716    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.178720    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.180563    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.674161    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:15.674175    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.674181    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.674184    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.676320    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:15.676809    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:15.676817    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.676822    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.676826    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.678731    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.679179    3848 pod_ready.go:93] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:15.679188    3848 pod_ready.go:82] duration metric: took 23.505390371s for pod "kube-controller-manager-ha-138000" in "kube-system" namespace to be "Ready" ...
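The block above is minikube's pod_ready wait loop: every ~500ms it GETs the pod and then its node until the pod's "Ready" condition flips to True (here, after 23.5s). A minimal client-go sketch of the same polling pattern (a hypothetical helper, not minikube's actual pod_ready.go):

    // waitPodReady polls a pod until its Ready condition is True,
    // mirroring the ~500ms GET loop in the log above.
    package main

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitPodReady(ctx context.Context, c *kubernetes.Clientset, ns, name string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat as transient and keep polling
                }
                for _, cond := range pod.Status.Conditions {
                    if cond.Type == corev1.PodReady {
                        return cond.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }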
	I0815 16:31:15.679194    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.679234    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000-m02
	I0815 16:31:15.679239    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.679244    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.679249    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.680973    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.681373    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:15.681379    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.681385    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.681389    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.683105    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.683478    3848 pod_ready.go:93] pod "kube-controller-manager-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:15.683487    3848 pod_ready.go:82] duration metric: took 4.286435ms for pod "kube-controller-manager-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.683493    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.683528    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000-m03
	I0815 16:31:15.683532    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.683538    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.683543    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.685040    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.685461    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:15.685469    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.685474    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.685478    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.687218    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.687628    3848 pod_ready.go:93] pod "kube-controller-manager-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:15.687636    3848 pod_ready.go:82] duration metric: took 4.137303ms for pod "kube-controller-manager-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.687642    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cznkn" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.687674    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cznkn
	I0815 16:31:15.687679    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.687685    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.687690    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.689397    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.689764    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:15.689771    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.689776    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.689787    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.691449    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.691750    3848 pod_ready.go:93] pod "kube-proxy-cznkn" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:15.691759    3848 pod_ready.go:82] duration metric: took 4.111581ms for pod "kube-proxy-cznkn" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.691765    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kxghx" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.691804    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kxghx
	I0815 16:31:15.691809    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.691815    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.691819    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.693452    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.693908    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:15.693915    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.693921    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.693924    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.695674    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.695946    3848 pod_ready.go:93] pod "kube-proxy-kxghx" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:15.695955    3848 pod_ready.go:82] duration metric: took 4.185821ms for pod "kube-proxy-kxghx" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.695961    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qpth7" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.875071    3848 request.go:632] Waited for 179.069493ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qpth7
	I0815 16:31:15.875187    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qpth7
	I0815 16:31:15.875199    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.875210    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.875216    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.877997    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:16.074238    3848 request.go:632] Waited for 195.764515ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m04
	I0815 16:31:16.074336    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m04
	I0815 16:31:16.074348    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:16.074360    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:16.074366    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:16.076828    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:16.077164    3848 pod_ready.go:93] pod "kube-proxy-qpth7" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:16.077173    3848 pod_ready.go:82] duration metric: took 381.20933ms for pod "kube-proxy-qpth7" in "kube-system" namespace to be "Ready" ...
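The "Waited ... due to client-side throttling, not priority and fairness" lines come from client-go's built-in client-side rate limiter (QPS 5, burst 10 by default), which begins delaying requests once the burst is spent by the polling above. A sketch of raising those limits on a rest.Config (the values here are illustrative, not what minikube sets):

    // Sketch: build a clientset with a higher client-side rate limit,
    // so tight polling loops are not delayed by the default QPS=5/Burst=10.
    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50   // illustrative values
        cfg.Burst = 100
        return kubernetes.NewForConfig(cfg)
    }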
	I0815 16:31:16.077180    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tf79g" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:16.275150    3848 request.go:632] Waited for 197.922377ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tf79g
	I0815 16:31:16.275315    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tf79g
	I0815 16:31:16.275333    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:16.275348    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:16.275355    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:16.279230    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:16.474637    3848 request.go:632] Waited for 194.734989ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:16.474686    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:16.474694    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:16.474748    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:16.474760    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:16.477402    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:16.477913    3848 pod_ready.go:93] pod "kube-proxy-tf79g" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:16.477922    3848 pod_ready.go:82] duration metric: took 400.738709ms for pod "kube-proxy-tf79g" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:16.477928    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:16.674642    3848 request.go:632] Waited for 196.671207ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000
	I0815 16:31:16.674730    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000
	I0815 16:31:16.674740    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:16.674751    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:16.674791    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:16.677902    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:16.874216    3848 request.go:632] Waited for 195.903155ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:16.874296    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:16.874307    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:16.874318    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:16.874325    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:16.877076    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:16.877354    3848 pod_ready.go:93] pod "kube-scheduler-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:16.877362    3848 pod_ready.go:82] duration metric: took 399.431009ms for pod "kube-scheduler-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:16.877369    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:17.075600    3848 request.go:632] Waited for 198.191772ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m02
	I0815 16:31:17.075685    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m02
	I0815 16:31:17.075692    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:17.075697    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:17.075701    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:17.077601    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:17.275453    3848 request.go:632] Waited for 196.87369ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:17.275508    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:17.275516    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:17.275528    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:17.275536    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:17.278217    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:17.278748    3848 pod_ready.go:93] pod "kube-scheduler-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:17.278761    3848 pod_ready.go:82] duration metric: took 401.387065ms for pod "kube-scheduler-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:17.278778    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:17.474217    3848 request.go:632] Waited for 195.389302ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m03
	I0815 16:31:17.474330    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m03
	I0815 16:31:17.474342    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:17.474353    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:17.474361    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:17.477689    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:17.675623    3848 request.go:632] Waited for 197.469909ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:17.675688    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:17.675697    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:17.675705    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:17.675712    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:17.677994    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:17.678325    3848 pod_ready.go:93] pod "kube-scheduler-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:17.678335    3848 pod_ready.go:82] duration metric: took 399.551961ms for pod "kube-scheduler-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:17.678343    3848 pod_ready.go:39] duration metric: took 37.624501402s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 16:31:17.678361    3848 api_server.go:52] waiting for apiserver process to appear ...
	I0815 16:31:17.678422    3848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 16:31:17.692897    3848 api_server.go:72] duration metric: took 46.249064527s to wait for apiserver process to appear ...
	I0815 16:31:17.692911    3848 api_server.go:88] waiting for apiserver healthz status ...
	I0815 16:31:17.692928    3848 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0815 16:31:17.695957    3848 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0815 16:31:17.695990    3848 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0815 16:31:17.695994    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:17.696000    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:17.696004    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:17.696581    3848 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0815 16:31:17.696664    3848 api_server.go:141] control plane version: v1.31.0
	I0815 16:31:17.696676    3848 api_server.go:131] duration metric: took 3.760735ms to wait for apiserver health ...
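The healthz probe above is a plain GET against /healthz that expects the literal body "ok", followed by a /version GET to read the control-plane version. With client-go the same health check can be expressed through the discovery REST client (a sketch, assuming an already-configured clientset):

    // Sketch: apiserver health probe equivalent to the /healthz GET above.
    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
    )

    func apiserverHealthy(ctx context.Context, c *kubernetes.Clientset) error {
        body, err := c.Discovery().RESTClient().Get().AbsPath("/healthz").Do(ctx).Raw()
        if err != nil {
            return err
        }
        if string(body) != "ok" {
            return fmt.Errorf("unexpected healthz response: %q", body)
        }
        return nil
    }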
	I0815 16:31:17.696684    3848 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 16:31:17.874475    3848 request.go:632] Waited for 177.745811ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:31:17.874542    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:31:17.874551    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:17.874608    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:17.874617    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:17.879453    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:31:17.884757    3848 system_pods.go:59] 26 kube-system pods found
	I0815 16:31:17.884772    3848 system_pods.go:61] "coredns-6f6b679f8f-dmgt5" [47d73953-ec2c-4f17-b2b8-d6a9b5e5a316] Running
	I0815 16:31:17.884778    3848 system_pods.go:61] "coredns-6f6b679f8f-zc8jj" [b4a9df39-b09d-4bc3-97f6-b3176ff8e842] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 16:31:17.884783    3848 system_pods.go:61] "etcd-ha-138000" [c2e7e508-a157-4581-a446-2f48e51c0c16] Running
	I0815 16:31:17.884787    3848 system_pods.go:61] "etcd-ha-138000-m02" [4f371d3f-821b-4eae-ab52-5b75d90acd67] Running
	I0815 16:31:17.884791    3848 system_pods.go:61] "etcd-ha-138000-m03" [ebdd90b3-c3b2-44c2-a411-4f6afa67a455] Running
	I0815 16:31:17.884793    3848 system_pods.go:61] "kindnet-77dc6" [24bfc069-9446-43a7-aa61-308c1a62a20e] Running
	I0815 16:31:17.884796    3848 system_pods.go:61] "kindnet-dsvxt" [7645852b-8535-4cb4-8324-15c9b55176e3] Running
	I0815 16:31:17.884798    3848 system_pods.go:61] "kindnet-m887r" [ba31865b-c712-47a8-9fd8-06420270ac8b] Running
	I0815 16:31:17.884801    3848 system_pods.go:61] "kindnet-z6mnx" [fc6211a0-df99-4350-90ba-a18f74bd1bfc] Running
	I0815 16:31:17.884804    3848 system_pods.go:61] "kube-apiserver-ha-138000" [bbc684f7-d8e2-44f2-987a-df9d05cf54fc] Running
	I0815 16:31:17.884806    3848 system_pods.go:61] "kube-apiserver-ha-138000-m02" [c30e129b-f475-4131-a25e-7eeecb39cbea] Running
	I0815 16:31:17.884809    3848 system_pods.go:61] "kube-apiserver-ha-138000-m03" [90304f02-10f2-446d-b731-674df5602401] Running
	I0815 16:31:17.884811    3848 system_pods.go:61] "kube-controller-manager-ha-138000" [1d4148d1-9798-4662-91de-d9a7dae634e2] Running
	I0815 16:31:17.884814    3848 system_pods.go:61] "kube-controller-manager-ha-138000-m02" [ad693dae-cb0a-46ca-955b-33a9465d911a] Running
	I0815 16:31:17.884816    3848 system_pods.go:61] "kube-controller-manager-ha-138000-m03" [11aa509a-dade-4b66-b157-4d4cccb7f349] Running
	I0815 16:31:17.884819    3848 system_pods.go:61] "kube-proxy-cznkn" [61cd1a9a-80fb-4d0e-a4dd-f5ed2d6ddc0f] Running
	I0815 16:31:17.884821    3848 system_pods.go:61] "kube-proxy-kxghx" [27f61dcc-2812-4038-af2a-aa9f46033869] Running
	I0815 16:31:17.884823    3848 system_pods.go:61] "kube-proxy-qpth7" [a343f80b-0fe9-4c88-9782-5fbf9a6170d1] Running
	I0815 16:31:17.884826    3848 system_pods.go:61] "kube-proxy-tf79g" [ef8a2aeb-55c4-436e-a306-da8b93c9707f] Running
	I0815 16:31:17.884830    3848 system_pods.go:61] "kube-scheduler-ha-138000" [ee1d3eb0-596d-4bf0-bed4-dbd38b286e38] Running
	I0815 16:31:17.884832    3848 system_pods.go:61] "kube-scheduler-ha-138000-m02" [1cc29309-49ad-427e-862a-758cc5711eef] Running
	I0815 16:31:17.884835    3848 system_pods.go:61] "kube-scheduler-ha-138000-m03" [2e3efb3c-6903-4306-adec-d4a47b3a56bd] Running
	I0815 16:31:17.884837    3848 system_pods.go:61] "kube-vip-ha-138000" [1b8c66e5-ffaa-4d4b-a220-8eb664b79d91] Running
	I0815 16:31:17.884839    3848 system_pods.go:61] "kube-vip-ha-138000-m02" [a0fe2ba6-1ace-456f-ab2e-15c43303dcad] Running
	I0815 16:31:17.884841    3848 system_pods.go:61] "kube-vip-ha-138000-m03" [44fe2bd5-bd48-4f86-b38a-7e4b3554174c] Running
	I0815 16:31:17.884844    3848 system_pods.go:61] "storage-provisioner" [35f3a9d7-cb68-4b10-82a2-1bd72d8d1aa6] Running
	I0815 16:31:17.884847    3848 system_pods.go:74] duration metric: took 188.159351ms to wait for pod list to return data ...
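The 26-pod inventory above is a single List call against the kube-system namespace; the "Running / Ready:ContainersNotReady ..." annotations are derived from each pod's phase and conditions. A minimal equivalent of that listing (sketch):

    // Sketch: reproduce the kube-system pod inventory printed above.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func listSystemPods(ctx context.Context, c *kubernetes.Clientset) error {
        pods, err := c.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
        }
        return nil
    }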
	I0815 16:31:17.884852    3848 default_sa.go:34] waiting for default service account to be created ...
	I0815 16:31:18.074641    3848 request.go:632] Waited for 189.738485ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0815 16:31:18.074728    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0815 16:31:18.074738    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:18.074749    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:18.074756    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:18.078635    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:18.078759    3848 default_sa.go:45] found service account: "default"
	I0815 16:31:18.078768    3848 default_sa.go:55] duration metric: took 193.912663ms for default service account to be created ...
	I0815 16:31:18.078774    3848 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 16:31:18.274230    3848 request.go:632] Waited for 195.413402ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:31:18.274340    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:31:18.274351    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:18.274361    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:18.274369    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:18.279297    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:31:18.284504    3848 system_pods.go:86] 26 kube-system pods found
	I0815 16:31:18.284515    3848 system_pods.go:89] "coredns-6f6b679f8f-dmgt5" [47d73953-ec2c-4f17-b2b8-d6a9b5e5a316] Running
	I0815 16:31:18.284521    3848 system_pods.go:89] "coredns-6f6b679f8f-zc8jj" [b4a9df39-b09d-4bc3-97f6-b3176ff8e842] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 16:31:18.284525    3848 system_pods.go:89] "etcd-ha-138000" [c2e7e508-a157-4581-a446-2f48e51c0c16] Running
	I0815 16:31:18.284530    3848 system_pods.go:89] "etcd-ha-138000-m02" [4f371d3f-821b-4eae-ab52-5b75d90acd67] Running
	I0815 16:31:18.284534    3848 system_pods.go:89] "etcd-ha-138000-m03" [ebdd90b3-c3b2-44c2-a411-4f6afa67a455] Running
	I0815 16:31:18.284537    3848 system_pods.go:89] "kindnet-77dc6" [24bfc069-9446-43a7-aa61-308c1a62a20e] Running
	I0815 16:31:18.284540    3848 system_pods.go:89] "kindnet-dsvxt" [7645852b-8535-4cb4-8324-15c9b55176e3] Running
	I0815 16:31:18.284543    3848 system_pods.go:89] "kindnet-m887r" [ba31865b-c712-47a8-9fd8-06420270ac8b] Running
	I0815 16:31:18.284545    3848 system_pods.go:89] "kindnet-z6mnx" [fc6211a0-df99-4350-90ba-a18f74bd1bfc] Running
	I0815 16:31:18.284550    3848 system_pods.go:89] "kube-apiserver-ha-138000" [bbc684f7-d8e2-44f2-987a-df9d05cf54fc] Running
	I0815 16:31:18.284554    3848 system_pods.go:89] "kube-apiserver-ha-138000-m02" [c30e129b-f475-4131-a25e-7eeecb39cbea] Running
	I0815 16:31:18.284557    3848 system_pods.go:89] "kube-apiserver-ha-138000-m03" [90304f02-10f2-446d-b731-674df5602401] Running
	I0815 16:31:18.284561    3848 system_pods.go:89] "kube-controller-manager-ha-138000" [1d4148d1-9798-4662-91de-d9a7dae634e2] Running
	I0815 16:31:18.284564    3848 system_pods.go:89] "kube-controller-manager-ha-138000-m02" [ad693dae-cb0a-46ca-955b-33a9465d911a] Running
	I0815 16:31:18.284567    3848 system_pods.go:89] "kube-controller-manager-ha-138000-m03" [11aa509a-dade-4b66-b157-4d4cccb7f349] Running
	I0815 16:31:18.284570    3848 system_pods.go:89] "kube-proxy-cznkn" [61cd1a9a-80fb-4d0e-a4dd-f5ed2d6ddc0f] Running
	I0815 16:31:18.284572    3848 system_pods.go:89] "kube-proxy-kxghx" [27f61dcc-2812-4038-af2a-aa9f46033869] Running
	I0815 16:31:18.284575    3848 system_pods.go:89] "kube-proxy-qpth7" [a343f80b-0fe9-4c88-9782-5fbf9a6170d1] Running
	I0815 16:31:18.284579    3848 system_pods.go:89] "kube-proxy-tf79g" [ef8a2aeb-55c4-436e-a306-da8b93c9707f] Running
	I0815 16:31:18.284582    3848 system_pods.go:89] "kube-scheduler-ha-138000" [ee1d3eb0-596d-4bf0-bed4-dbd38b286e38] Running
	I0815 16:31:18.284586    3848 system_pods.go:89] "kube-scheduler-ha-138000-m02" [1cc29309-49ad-427e-862a-758cc5711eef] Running
	I0815 16:31:18.284588    3848 system_pods.go:89] "kube-scheduler-ha-138000-m03" [2e3efb3c-6903-4306-adec-d4a47b3a56bd] Running
	I0815 16:31:18.284591    3848 system_pods.go:89] "kube-vip-ha-138000" [1b8c66e5-ffaa-4d4b-a220-8eb664b79d91] Running
	I0815 16:31:18.284594    3848 system_pods.go:89] "kube-vip-ha-138000-m02" [a0fe2ba6-1ace-456f-ab2e-15c43303dcad] Running
	I0815 16:31:18.284596    3848 system_pods.go:89] "kube-vip-ha-138000-m03" [44fe2bd5-bd48-4f86-b38a-7e4b3554174c] Running
	I0815 16:31:18.284599    3848 system_pods.go:89] "storage-provisioner" [35f3a9d7-cb68-4b10-82a2-1bd72d8d1aa6] Running
	I0815 16:31:18.284603    3848 system_pods.go:126] duration metric: took 205.826361ms to wait for k8s-apps to be running ...
	I0815 16:31:18.284609    3848 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 16:31:18.284679    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 16:31:18.296708    3848 system_svc.go:56] duration metric: took 12.095446ms WaitForService to wait for kubelet
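The kubelet check above shells out to systemctl over SSH and treats a zero exit code as "active". Run locally rather than through minikube's ssh_runner, the same probe looks like this sketch:

    // Sketch: "is the kubelet service running?" via systemctl's exit code.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func kubeletActive() bool {
        // `systemctl is-active --quiet <unit>` exits 0 iff the unit is active.
        return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }

    func main() {
        fmt.Println("kubelet active:", kubeletActive())
    }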
	I0815 16:31:18.296724    3848 kubeadm.go:582] duration metric: took 46.852894704s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 16:31:18.296736    3848 node_conditions.go:102] verifying NodePressure condition ...
	I0815 16:31:18.474267    3848 request.go:632] Waited for 177.483283ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0815 16:31:18.474322    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0815 16:31:18.474330    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:18.474371    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:18.474392    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:18.477388    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:18.478383    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:31:18.478396    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:31:18.478405    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:31:18.478408    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:31:18.478412    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:31:18.478415    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:31:18.478418    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:31:18.478423    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:31:18.478427    3848 node_conditions.go:105] duration metric: took 181.688465ms to run NodePressure ...
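The NodePressure pass lists all nodes and reads each node's capacity (the four nodes above each report 2 CPUs and 17734596Ki of ephemeral storage) while verifying no pressure conditions are set. A sketch of the same read:

    // Sketch: per-node capacity and pressure conditions, as checked above.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func checkNodePressure(ctx context.Context, c *kubernetes.Clientset) error {
        nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
                n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
            for _, cond := range n.Status.Conditions {
                if (cond.Type == corev1.NodeMemoryPressure || cond.Type == corev1.NodeDiskPressure) &&
                    cond.Status == corev1.ConditionTrue {
                    fmt.Printf("  %s is under %s\n", n.Name, cond.Type)
                }
            }
        }
        return nil
    }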
	I0815 16:31:18.478434    3848 start.go:241] waiting for startup goroutines ...
	I0815 16:31:18.478453    3848 start.go:255] writing updated cluster config ...
	I0815 16:31:18.501967    3848 out.go:201] 
	I0815 16:31:18.522062    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:31:18.522177    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:31:18.560022    3848 out.go:177] * Starting "ha-138000-m03" control-plane node in "ha-138000" cluster
	I0815 16:31:18.618077    3848 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:31:18.618104    3848 cache.go:56] Caching tarball of preloaded images
	I0815 16:31:18.618293    3848 preload.go:172] Found /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0815 16:31:18.618310    3848 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:31:18.618409    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:31:18.619051    3848 start.go:360] acquireMachinesLock for ha-138000-m03: {Name:mkac8034c8a60617271f2754a03dcaf74fc4fef7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:31:18.619147    3848 start.go:364] duration metric: took 77.203µs to acquireMachinesLock for "ha-138000-m03"
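acquireMachinesLock serializes machine create/start operations across concurrent minikube processes, with the 500ms retry delay and 13m timeout shown above. A simplified lock-file sketch of the idea (hypothetical; minikube's real implementation uses a named mutex, not this file scheme):

    // Sketch: exclusive lock file guarding machine operations.
    // Hypothetical simplification of minikube's acquireMachinesLock.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func acquireLock(path string, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            // O_EXCL makes creation fail while another process holds the lock.
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(500 * time.Millisecond) // matches the Delay:500ms above
        }
    }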
	I0815 16:31:18.619166    3848 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:31:18.619174    3848 fix.go:54] fixHost starting: m03
	I0815 16:31:18.619485    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:31:18.619510    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:31:18.628416    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52280
	I0815 16:31:18.628739    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:31:18.629076    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:31:18.629087    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:31:18.629285    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:31:18.629412    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:31:18.629506    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetState
	I0815 16:31:18.629587    3848 main.go:141] libmachine: (ha-138000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:31:18.629688    3848 main.go:141] libmachine: (ha-138000-m03) DBG | hyperkit pid from json: 3119
	I0815 16:31:18.630594    3848 main.go:141] libmachine: (ha-138000-m03) DBG | hyperkit pid 3119 missing from process table
	I0815 16:31:18.630635    3848 fix.go:112] recreateIfNeeded on ha-138000-m03: state=Stopped err=<nil>
	I0815 16:31:18.630646    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	W0815 16:31:18.630738    3848 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:31:18.653953    3848 out.go:177] * Restarting existing hyperkit VM for "ha-138000-m03" ...
	I0815 16:31:18.711722    3848 main.go:141] libmachine: (ha-138000-m03) Calling .Start
	I0815 16:31:18.712041    3848 main.go:141] libmachine: (ha-138000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:31:18.712160    3848 main.go:141] libmachine: (ha-138000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/hyperkit.pid
	I0815 16:31:18.713734    3848 main.go:141] libmachine: (ha-138000-m03) DBG | hyperkit pid 3119 missing from process table
	I0815 16:31:18.713751    3848 main.go:141] libmachine: (ha-138000-m03) DBG | pid 3119 is in state "Stopped"
	I0815 16:31:18.713774    3848 main.go:141] libmachine: (ha-138000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/hyperkit.pid...
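The "pid 3119 missing from process table" / "pid 3119 is in state Stopped" checks boil down to the classic signal-0 probe: sending signal 0 to a pid delivers nothing but fails if no such process exists, which is how a stale hyperkit.pid file is detected before removal. A sketch:

    // Sketch: detect a stale hyperkit.pid with the signal-0 probe.
    package main

    import (
        "fmt"
        "os"
        "syscall"
    )

    func pidAlive(pid int) bool {
        proc, err := os.FindProcess(pid) // always succeeds on Unix
        if err != nil {
            return false
        }
        // Signal 0 reports existence without affecting the process.
        return proc.Signal(syscall.Signal(0)) == nil
    }

    func main() {
        fmt.Println(pidAlive(3119)) // the stale pid from the log above
    }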
	I0815 16:31:18.713958    3848 main.go:141] libmachine: (ha-138000-m03) DBG | Using UUID 4228381e-4618-4b8b-ac7c-129bf380703a
	I0815 16:31:18.742338    3848 main.go:141] libmachine: (ha-138000-m03) DBG | Generated MAC 9e:18:89:2a:2d:99
	I0815 16:31:18.742370    3848 main.go:141] libmachine: (ha-138000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000
	I0815 16:31:18.742565    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"4228381e-4618-4b8b-ac7c-129bf380703a", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037f470)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:31:18.742609    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"4228381e-4618-4b8b-ac7c-129bf380703a", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037f470)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:31:18.742699    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "4228381e-4618-4b8b-ac7c-129bf380703a", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/ha-138000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-13
8000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"}
	I0815 16:31:18.742751    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 4228381e-4618-4b8b-ac7c-129bf380703a -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/ha-138000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"
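The Start/check/Arguments/CmdLine sequence above is one invocation rendered several ways: the driver flattens the HyperKit struct into an argv and then logs the same argv joined as CmdLine. A minimal Go sketch of that assembly (state dir shortened, all values hypothetical), with the meaning of each -s PCI slot noted:

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		state := "/Users/jenkins/.minikube/machines/ha-138000-m03" // hypothetical state dir
		args := []string{
			"/usr/local/bin/hyperkit",
			"-A", "-u",
			"-F", state + "/hyperkit.pid", // pidfile
			"-c", "2", // vCPUs
			"-m", "2200M", // memory
			"-s", "0:0,hostbridge", // PCI slot 0: host bridge
			"-s", "31,lpc", // slot 31: LPC bus for the serial console
			"-s", "1:0,virtio-net", // slot 1: NIC on the vmnet framework
			"-s", "2:0,virtio-blk," + state + "/ha-138000-m03.rawdisk", // root disk
			"-s", "3,ahci-cd," + state + "/boot2docker.iso", // boot ISO
			"-s", "4,virtio-rnd", // entropy device
		}
		// The logged "CmdLine:" is just the argv joined with spaces.
		fmt.Println(strings.Join(args, " "))
	}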
	I0815 16:31:18.742790    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0815 16:31:18.744551    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 DEBUG: hyperkit: Pid is 4186
	I0815 16:31:18.745071    3848 main.go:141] libmachine: (ha-138000-m03) DBG | Attempt 0
	I0815 16:31:18.745087    3848 main.go:141] libmachine: (ha-138000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:31:18.745163    3848 main.go:141] libmachine: (ha-138000-m03) DBG | hyperkit pid from json: 4186
	I0815 16:31:18.746856    3848 main.go:141] libmachine: (ha-138000-m03) DBG | Searching for 9e:18:89:2a:2d:99 in /var/db/dhcpd_leases ...
	I0815 16:31:18.746937    3848 main.go:141] libmachine: (ha-138000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0815 16:31:18.746955    3848 main.go:141] libmachine: (ha-138000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:31:18.746980    3848 main.go:141] libmachine: (ha-138000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:31:18.746991    3848 main.go:141] libmachine: (ha-138000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be8e15}
	I0815 16:31:18.747032    3848 main.go:141] libmachine: (ha-138000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfdedc}
	I0815 16:31:18.747039    3848 main.go:141] libmachine: (ha-138000-m03) DBG | Found match: 9e:18:89:2a:2d:99
	I0815 16:31:18.747040    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetConfigRaw
	I0815 16:31:18.747045    3848 main.go:141] libmachine: (ha-138000-m03) DBG | IP: 192.169.0.7
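The IP lookup above works by scanning /var/db/dhcpd_leases for the MAC address the driver assigned to the VM; each "Attempt" re-reads the file until vmnet's DHCP server has written a lease. A sketch of that scan, assuming the usual macOS lease format of `{ ... ip_address=... hw_address=1,<mac> ... }` blocks with ip_address preceding hw_address, as it does in practice:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// findLeaseIP scans a macOS dhcpd_leases file for the entry whose
	// hw_address matches the given MAC and returns its ip_address.
	func findLeaseIP(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()

		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case line == "{": // a new lease entry starts
				ip = ""
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// hw_address is "1,<mac>"; the leading 1 is the hardware type
				if strings.HasSuffix(line, ","+mac) || strings.HasSuffix(line, "="+mac) {
					return ip, nil
				}
			}
		}
		return "", fmt.Errorf("%s not found in %s", mac, path)
	}

	func main() {
		ip, err := findLeaseIP("/var/db/dhcpd_leases", "9e:18:89:2a:2d:99")
		fmt.Println(ip, err)
	}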
	I0815 16:31:18.747774    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetIP
	I0815 16:31:18.747963    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:31:18.748524    3848 machine.go:93] provisionDockerMachine start ...
	I0815 16:31:18.748538    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:31:18.748670    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:18.748765    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:18.748845    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:18.748950    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:18.749050    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:18.749179    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:31:18.749325    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0815 16:31:18.749333    3848 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 16:31:18.752657    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0815 16:31:18.760833    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0815 16:31:18.761721    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:31:18.761738    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:31:18.761746    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:31:18.761755    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:31:19.145894    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0815 16:31:19.145910    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0815 16:31:19.260828    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:31:19.260843    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:31:19.260851    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:31:19.260862    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:31:19.261711    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0815 16:31:19.261721    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0815 16:31:24.888063    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:24 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0815 16:31:24.888137    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:24 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0815 16:31:24.888149    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:24 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0815 16:31:24.911372    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:24 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0815 16:31:29.819902    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
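provisionDockerMachine's first SSH command is a bare `hostname`, used purely to prove the guest is reachable; the native-client struct logged above carries user "docker", host 192.169.0.7, port 22 and the machine's id_rsa. A rough equivalent using golang.org/x/crypto/ssh (key path hypothetical; the real client also retries until sshd answers, which is why the command only returns after the boot messages above):

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runSSH dials host:22 as the given user and runs one command, returning
	// its combined output - roughly what each "About to run SSH command" /
	// "SSH cmd err, output" pair in the log corresponds to.
	func runSSH(host, user, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VMs have throwaway host keys
		}
		client, err := ssh.Dial("tcp", host+":22", cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		out, err := runSSH("192.169.0.7", "docker",
			os.Getenv("HOME")+"/.minikube/machines/ha-138000-m03/id_rsa", "hostname")
		fmt.Println(out, err)
	}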
	I0815 16:31:29.819917    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetMachineName
	I0815 16:31:29.820052    3848 buildroot.go:166] provisioning hostname "ha-138000-m03"
	I0815 16:31:29.820067    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetMachineName
	I0815 16:31:29.820174    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:29.820268    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:29.820353    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:29.820429    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:29.820504    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:29.820626    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:31:29.820777    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0815 16:31:29.820785    3848 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-138000-m03 && echo "ha-138000-m03" | sudo tee /etc/hostname
	I0815 16:31:29.898224    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-138000-m03
	
	I0815 16:31:29.898247    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:29.898395    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:29.898481    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:29.898567    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:29.898654    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:29.898789    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:31:29.898974    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0815 16:31:29.898986    3848 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-138000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-138000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-138000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 16:31:29.968919    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
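The /etc/hosts script above is idempotent: it only touches the file when no line already maps the hostname, and it prefers rewriting a stale 127.0.1.1 entry in place over appending a new one. A sketch of rendering that script for an arbitrary hostname (an illustration of the shape, not minikube's actual template):

	package main

	import "fmt"

	// hostsScript renders the idempotent /etc/hosts fix-up shown in the log:
	// add a 127.0.1.1 entry for the hostname unless one already exists,
	// replacing any stale 127.0.1.1 line in place.
	func hostsScript(hostname string) string {
		return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
	}

	func main() { fmt.Println(hostsScript("ha-138000-m03")) }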
	I0815 16:31:29.968938    3848 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19452-977/.minikube CaCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19452-977/.minikube}
	I0815 16:31:29.968947    3848 buildroot.go:174] setting up certificates
	I0815 16:31:29.968952    3848 provision.go:84] configureAuth start
	I0815 16:31:29.968959    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetMachineName
	I0815 16:31:29.969088    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetIP
	I0815 16:31:29.969172    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:29.969251    3848 provision.go:143] copyHostCerts
	I0815 16:31:29.969278    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:31:29.969343    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem, removing ...
	I0815 16:31:29.969348    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:31:29.969482    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem (1082 bytes)
	I0815 16:31:29.969678    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:31:29.969716    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem, removing ...
	I0815 16:31:29.969721    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:31:29.969830    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem (1123 bytes)
	I0815 16:31:29.969984    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:31:29.970023    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem, removing ...
	I0815 16:31:29.970028    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:31:29.970129    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem (1679 bytes)
	I0815 16:31:29.970281    3848 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem org=jenkins.ha-138000-m03 san=[127.0.0.1 192.169.0.7 ha-138000-m03 localhost minikube]
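The SAN list logged for the server cert (127.0.0.1, the machine IP, the hostname, localhost, minikube) is what lets the Docker TLS endpoint verify under any of the names it may be dialed by. A sketch of building such a certificate with crypto/x509 (self-signed here for brevity; the cert in the log is signed by the shared CA instead):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// The SAN list from the log: loopback, the machine IP, and the
		// hostname aliases the endpoint may be reached by.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-138000-m03"}},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.7")},
			DNSNames:     []string{"ha-138000-m03", "localhost", "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		// Passing tmpl as its own parent makes it self-signed.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		fmt.Println(len(der), err)
	}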
	I0815 16:31:30.063220    3848 provision.go:177] copyRemoteCerts
	I0815 16:31:30.063270    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 16:31:30.063286    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:30.063426    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:30.063510    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:30.063603    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:30.063685    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/id_rsa Username:docker}
	I0815 16:31:30.101783    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 16:31:30.101861    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 16:31:30.121792    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 16:31:30.121868    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 16:31:30.141970    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 16:31:30.142077    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 16:31:30.161960    3848 provision.go:87] duration metric: took 192.993235ms to configureAuth
	I0815 16:31:30.161983    3848 buildroot.go:189] setting minikube options for container-runtime
	I0815 16:31:30.162167    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:31:30.162199    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:31:30.162337    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:30.162430    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:30.162521    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:30.162598    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:30.162675    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:30.162784    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:31:30.162913    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0815 16:31:30.162921    3848 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0815 16:31:30.228685    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0815 16:31:30.228697    3848 buildroot.go:70] root file system type: tmpfs
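The `df --output=fstype /` probe confirms the buildroot guest keeps its root on tmpfs, so unit files written under /lib/systemd do not survive a reboot and the docker unit has to be regenerated on every provision. The probe itself, in Go (assumes GNU df, as in the guest; macOS df has no --output flag):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// rootFSType mirrors the probe in the log: `df --output=fstype / | tail -n 1`.
	func rootFSType() (string, error) {
		out, err := exec.Command("df", "--output=fstype", "/").Output()
		if err != nil {
			return "", err
		}
		fields := strings.Fields(string(out)) // ["Type", "<fstype>"]
		return fields[len(fields)-1], nil     // last field is the fstype of /
	}

	func main() {
		t, err := rootFSType()
		fmt.Println(t, err) // "tmpfs" inside the buildroot guest
	}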
	I0815 16:31:30.228781    3848 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0815 16:31:30.228793    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:30.228929    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:30.229020    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:30.229108    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:30.229195    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:30.229313    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:31:30.229444    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0815 16:31:30.229494    3848 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0815 16:31:30.305200    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0815 16:31:30.305217    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:30.305352    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:30.305448    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:30.305543    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:30.305648    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:30.305802    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:31:30.305948    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0815 16:31:30.305961    3848 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0815 16:31:31.969522    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0815 16:31:31.969536    3848 machine.go:96] duration metric: took 13.221047415s to provisionDockerMachine
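The `diff ... || { mv ...; systemctl ... }` one-liner above is an update-if-changed guard: docker is only reloaded, enabled and restarted when the freshly rendered unit differs from the live one, and a missing live unit (as here, hence the diff stat error) counts as a difference. The same guard sketched in Go:

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// installIfChanged mirrors the `diff || { mv; systemctl ... }` idiom:
	// replace the live unit and restart the service only when the newly
	// rendered unit differs (or the live unit does not exist yet).
	func installIfChanged(newPath, livePath, service string) error {
		newUnit, err := os.ReadFile(newPath)
		if err != nil {
			return err
		}
		oldUnit, err := os.ReadFile(livePath) // read error => treat as changed
		if err == nil && bytes.Equal(oldUnit, newUnit) {
			return nil // nothing to do; avoids a needless docker restart
		}
		if err := os.Rename(newPath, livePath); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"daemon-reload"}, {"enable", service}, {"restart", service},
		} {
			if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		fmt.Println(installIfChanged("/lib/systemd/system/docker.service.new",
			"/lib/systemd/system/docker.service", "docker"))
	}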
	I0815 16:31:31.969548    3848 start.go:293] postStartSetup for "ha-138000-m03" (driver="hyperkit")
	I0815 16:31:31.969555    3848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 16:31:31.969566    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:31:31.969757    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 16:31:31.969772    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:31.969871    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:31.969976    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:31.970054    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:31.970139    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/id_rsa Username:docker}
	I0815 16:31:32.013928    3848 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 16:31:32.017159    3848 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 16:31:32.017170    3848 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/addons for local assets ...
	I0815 16:31:32.017274    3848 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/files for local assets ...
	I0815 16:31:32.017462    3848 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> 14982.pem in /etc/ssl/certs
	I0815 16:31:32.017468    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /etc/ssl/certs/14982.pem
	I0815 16:31:32.017677    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 16:31:32.029028    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:31:32.059130    3848 start.go:296] duration metric: took 89.573356ms for postStartSetup
	I0815 16:31:32.059162    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:31:32.059341    3848 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0815 16:31:32.059355    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:32.059449    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:32.059534    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:32.059624    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:32.059708    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/id_rsa Username:docker}
	I0815 16:31:32.098694    3848 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0815 16:31:32.098758    3848 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0815 16:31:32.152993    3848 fix.go:56] duration metric: took 13.533862474s for fixHost
	I0815 16:31:32.153017    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:32.153168    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:32.153266    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:32.153360    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:32.153453    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:32.153579    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:31:32.153719    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0815 16:31:32.153727    3848 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 16:31:32.220010    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723764692.474550074
	
	I0815 16:31:32.220026    3848 fix.go:216] guest clock: 1723764692.474550074
	I0815 16:31:32.220031    3848 fix.go:229] Guest: 2024-08-15 16:31:32.474550074 -0700 PDT Remote: 2024-08-15 16:31:32.153007 -0700 PDT m=+98.155027601 (delta=321.543074ms)
	I0815 16:31:32.220043    3848 fix.go:200] guest clock delta is within tolerance: 321.543074ms
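The fix.go lines compare the guest's `date +%s.%N` output against host time; since the ~321ms delta is inside tolerance, the guest clock is left alone. A sketch of that comparison using the values from the log:

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output and returns how far
	// it is ahead of (positive) or behind (negative) the host clock.
	func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(host), nil
	}

	func main() {
		// Values from the log: guest 1723764692.474550074 vs. host ...692.153007.
		host := time.Unix(1723764692, 153007000)
		d, _ := clockDelta("1723764692.474550074", host)
		fmt.Println(d) // ~321ms, inside the tolerance the log reports
	}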
	I0815 16:31:32.220047    3848 start.go:83] releasing machines lock for "ha-138000-m03", held for 13.600937599s
	I0815 16:31:32.220063    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:31:32.220193    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetIP
	I0815 16:31:32.242484    3848 out.go:177] * Found network options:
	I0815 16:31:32.262540    3848 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0815 16:31:32.284750    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 16:31:32.284780    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 16:31:32.284808    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:31:32.285357    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:31:32.285486    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:31:32.285580    3848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 16:31:32.285610    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	W0815 16:31:32.285635    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 16:31:32.285649    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 16:31:32.285725    3848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 16:31:32.285743    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:32.285746    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:32.285912    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:32.285930    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:32.286051    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:32.286078    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:32.286176    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:32.286220    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/id_rsa Username:docker}
	I0815 16:31:32.286297    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/id_rsa Username:docker}
	W0815 16:31:32.322271    3848 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 16:31:32.322331    3848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 16:31:32.369504    3848 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 16:31:32.369521    3848 start.go:495] detecting cgroup driver to use...
	I0815 16:31:32.369607    3848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:31:32.385397    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0815 16:31:32.393793    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0815 16:31:32.401893    3848 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0815 16:31:32.401954    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0815 16:31:32.410021    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:31:32.418144    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0815 16:31:32.426371    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:31:32.434583    3848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 16:31:32.442902    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0815 16:31:32.451254    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0815 16:31:32.459565    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0815 16:31:32.467863    3848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 16:31:32.475226    3848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 16:31:32.482724    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:31:32.583602    3848 ssh_runner.go:195] Run: sudo systemctl restart containerd
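Each sed above rewrites one key in /etc/containerd/config.toml before the restart; the one that matters for this run is SystemdCgroup = false, which pins containerd to the cgroupfs driver that Docker is configured to match a few lines later. The same substitution done in-process:

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	    SystemdCgroup = true`
		// Same substitution as the logged sed: force SystemdCgroup = false so
		// containerd drives cgroups via cgroupfs rather than systemd.
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
	}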
	I0815 16:31:32.603710    3848 start.go:495] detecting cgroup driver to use...
	I0815 16:31:32.603796    3848 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0815 16:31:32.620091    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:31:32.633248    3848 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 16:31:32.652532    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:31:32.666138    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:31:32.676424    3848 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0815 16:31:32.697061    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:31:32.707503    3848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:31:32.722896    3848 ssh_runner.go:195] Run: which cri-dockerd
	I0815 16:31:32.725902    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0815 16:31:32.733526    3848 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0815 16:31:32.747908    3848 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0815 16:31:32.853084    3848 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0815 16:31:32.953384    3848 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0815 16:31:32.953408    3848 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0815 16:31:32.968013    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:31:33.073760    3848 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0815 16:31:35.380632    3848 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.306859581s)
	I0815 16:31:35.380695    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0815 16:31:35.391776    3848 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0815 16:31:35.404750    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:31:35.414823    3848 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0815 16:31:35.508250    3848 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0815 16:31:35.605930    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:31:35.720643    3848 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0815 16:31:35.734388    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:31:35.745523    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:31:35.849768    3848 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0815 16:31:35.916223    3848 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0815 16:31:35.916311    3848 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0815 16:31:35.920652    3848 start.go:563] Will wait 60s for crictl version
	I0815 16:31:35.920712    3848 ssh_runner.go:195] Run: which crictl
	I0815 16:31:35.923687    3848 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 16:31:35.951143    3848 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
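`crictl version` output is a flat key/value block, which is why the log can report the runtime name, version and API version directly. A sketch of parsing it:

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// parseCrictlVersion pulls the key/value pairs out of `crictl version`
	// output shaped like the block in the log.
	func parseCrictlVersion(out string) map[string]string {
		kv := map[string]string{}
		sc := bufio.NewScanner(strings.NewReader(out))
		for sc.Scan() {
			if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
				kv[strings.TrimSpace(k)] = strings.TrimSpace(v)
			}
		}
		return kv
	}

	func main() {
		out := "Version:  0.1.0\nRuntimeName:  docker\nRuntimeVersion:  27.1.2\nRuntimeApiVersion:  v1\n"
		fmt.Println(parseCrictlVersion(out)["RuntimeVersion"]) // 27.1.2
	}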
	I0815 16:31:35.951216    3848 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:31:35.970702    3848 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:31:36.011114    3848 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0815 16:31:36.053083    3848 out.go:177]   - env NO_PROXY=192.169.0.5
	I0815 16:31:36.074064    3848 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0815 16:31:36.094992    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetIP
	I0815 16:31:36.095254    3848 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0815 16:31:36.098563    3848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 16:31:36.107924    3848 mustload.go:65] Loading cluster: ha-138000
	I0815 16:31:36.108121    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:31:36.108349    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:31:36.108371    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:31:36.117631    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52302
	I0815 16:31:36.118004    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:31:36.118362    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:31:36.118373    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:31:36.118572    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:31:36.118683    3848 main.go:141] libmachine: (ha-138000) Calling .GetState
	I0815 16:31:36.118769    3848 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:31:36.118858    3848 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid from json: 3862
	I0815 16:31:36.119807    3848 host.go:66] Checking if "ha-138000" exists ...
	I0815 16:31:36.120056    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:31:36.120079    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:31:36.128888    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52304
	I0815 16:31:36.129245    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:31:36.129613    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:31:36.129628    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:31:36.129838    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:31:36.129960    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:31:36.130061    3848 certs.go:68] Setting up /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000 for IP: 192.169.0.7
	I0815 16:31:36.130067    3848 certs.go:194] generating shared ca certs ...
	I0815 16:31:36.130076    3848 certs.go:226] acquiring lock for ca certs: {Name:mka1c88019f6064d2983ac988db71a67aeb65696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:31:36.130237    3848 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key
	I0815 16:31:36.130321    3848 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key
	I0815 16:31:36.130330    3848 certs.go:256] generating profile certs ...
	I0815 16:31:36.130443    3848 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.key
	I0815 16:31:36.130530    3848 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key.c7e1c29f
	I0815 16:31:36.130604    3848 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key
	I0815 16:31:36.130617    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 16:31:36.130638    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 16:31:36.130658    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 16:31:36.130676    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 16:31:36.130694    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 16:31:36.130735    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 16:31:36.130766    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 16:31:36.130785    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 16:31:36.130871    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem (1338 bytes)
	W0815 16:31:36.130920    3848 certs.go:480] ignoring /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498_empty.pem, impossibly tiny 0 bytes
	I0815 16:31:36.130928    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 16:31:36.130977    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem (1082 bytes)
	I0815 16:31:36.131019    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem (1123 bytes)
	I0815 16:31:36.131050    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem (1679 bytes)
	I0815 16:31:36.131116    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:31:36.131153    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem -> /usr/share/ca-certificates/1498.pem
	I0815 16:31:36.131174    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /usr/share/ca-certificates/14982.pem
	I0815 16:31:36.131191    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:31:36.131214    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:31:36.131305    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:31:36.131384    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:31:36.131503    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:31:36.131582    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:31:36.163135    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0815 16:31:36.167195    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0815 16:31:36.177598    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0815 16:31:36.181380    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0815 16:31:36.190596    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0815 16:31:36.194001    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0815 16:31:36.202689    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0815 16:31:36.205906    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0815 16:31:36.214386    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0815 16:31:36.217472    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0815 16:31:36.226235    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0815 16:31:36.229561    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0815 16:31:36.238534    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 16:31:36.259009    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 16:31:36.279081    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 16:31:36.299147    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 16:31:36.319142    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 16:31:36.339480    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 16:31:36.359157    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 16:31:36.379445    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 16:31:36.399731    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem --> /usr/share/ca-certificates/1498.pem (1338 bytes)
	I0815 16:31:36.419506    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /usr/share/ca-certificates/14982.pem (1708 bytes)
	I0815 16:31:36.439172    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 16:31:36.458742    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0815 16:31:36.472323    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0815 16:31:36.486349    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0815 16:31:36.500064    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0815 16:31:36.513680    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0815 16:31:36.527778    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0815 16:31:36.541967    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0815 16:31:36.555903    3848 ssh_runner.go:195] Run: openssl version
	I0815 16:31:36.560554    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14982.pem && ln -fs /usr/share/ca-certificates/14982.pem /etc/ssl/certs/14982.pem"
	I0815 16:31:36.569772    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14982.pem
	I0815 16:31:36.573086    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:14 /usr/share/ca-certificates/14982.pem
	I0815 16:31:36.573133    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14982.pem
	I0815 16:31:36.577434    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14982.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 16:31:36.585945    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 16:31:36.594481    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:31:36.598014    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:05 /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:31:36.598056    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:31:36.602322    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 16:31:36.611545    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1498.pem && ln -fs /usr/share/ca-certificates/1498.pem /etc/ssl/certs/1498.pem"
	I0815 16:31:36.620267    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1498.pem
	I0815 16:31:36.623763    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:14 /usr/share/ca-certificates/1498.pem
	I0815 16:31:36.623818    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1498.pem
	I0815 16:31:36.628404    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1498.pem /etc/ssl/certs/51391683.0"
	I0815 16:31:36.637260    3848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 16:31:36.640760    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 16:31:36.645076    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 16:31:36.649285    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 16:31:36.653546    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 16:31:36.657801    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 16:31:36.662041    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
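Each `openssl x509 ... -checkend 86400` above asks whether the given certificate expires within the next 24 hours; a non-zero exit would trigger regeneration. The equivalent check in Go (path hypothetical):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires in the
	// next d - the question `openssl x509 -checkend 86400` answers for 24h.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		ok, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
		fmt.Println(ok, err)
	}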
	I0815 16:31:36.666218    3848 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.31.0 docker true true} ...
	I0815 16:31:36.666285    3848 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-138000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
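Only two kubelet flags in the unit above are node-specific: --hostname-override and --node-ip; everything else is shared across the HA cluster's nodes. A sketch of rendering that ExecStart line:

	package main

	import "fmt"

	// kubeletExecStart renders the node-specific part of the kubelet unit in
	// the log; only the hostname override and node IP vary between nodes.
	func kubeletExecStart(version, nodeName, nodeIP string) string {
		return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet"+
			" --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf"+
			" --config=/var/lib/kubelet/config.yaml"+
			" --hostname-override=%s"+
			" --kubeconfig=/etc/kubernetes/kubelet.conf"+
			" --node-ip=%s", version, nodeName, nodeIP)
	}

	func main() {
		fmt.Println(kubeletExecStart("v1.31.0", "ha-138000-m03", "192.169.0.7"))
	}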
	I0815 16:31:36.666303    3848 kube-vip.go:115] generating kube-vip config ...
	I0815 16:31:36.666340    3848 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 16:31:36.678617    3848 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 16:31:36.678664    3848 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
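The kube-vip manifest pins the API-server VIP 192.169.0.254 on eth0, with leader election (vip_leaderelection, lease plndr-cp-lock) deciding which control-plane node answers for it, and lb_enable/lb_port 8443 adding the control-plane load-balancing that the "auto-enabling" line above refers to. A sketch of pulling those env settings back out of such a manifest (trimmed to a few entries; assumes gopkg.in/yaml.v3):

	package main

	import (
		"fmt"

		"gopkg.in/yaml.v3"
	)

	// pod is just enough structure to read the env list out of a kube-vip
	// pod manifest like the one above.
	type pod struct {
		Spec struct {
			Containers []struct {
				Env []struct {
					Name  string `yaml:"name"`
					Value string `yaml:"value"`
				} `yaml:"env"`
			} `yaml:"containers"`
		} `yaml:"spec"`
	}

	func main() {
		manifest := `
	spec:
	  containers:
	  - env:
	    - name: address
	      value: 192.169.0.254
	    - name: vip_leaderelection
	      value: "true"
	    - name: lb_enable
	      value: "true"`
		var p pod
		if err := yaml.Unmarshal([]byte(manifest), &p); err != nil {
			panic(err)
		}
		for _, e := range p.Spec.Containers[0].Env {
			fmt.Printf("%s=%s\n", e.Name, e.Value)
		}
	}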
	I0815 16:31:36.678722    3848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 16:31:36.686802    3848 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 16:31:36.686869    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0815 16:31:36.694600    3848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0815 16:31:36.708358    3848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 16:31:36.721865    3848 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0815 16:31:36.736604    3848 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0815 16:31:36.739496    3848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 16:31:36.748868    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:31:36.847387    3848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:31:36.862652    3848 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 16:31:36.862839    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:31:36.884247    3848 out.go:177] * Verifying Kubernetes components...
	I0815 16:31:36.904597    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:31:37.032729    3848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:31:37.044674    3848 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:31:37.044869    3848 kapi.go:59] client config for ha-138000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.key", CAFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x3a6cf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0815 16:31:37.044913    3848 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0815 16:31:37.045078    3848 node_ready.go:35] waiting up to 6m0s for node "ha-138000-m03" to be "Ready" ...
	I0815 16:31:37.045127    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:37.045132    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.045138    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.045142    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.047558    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.545663    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:37.545719    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.545727    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.545756    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.548346    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.548775    3848 node_ready.go:49] node "ha-138000-m03" has status "Ready":"True"
	I0815 16:31:37.548786    3848 node_ready.go:38] duration metric: took 503.701087ms for node "ha-138000-m03" to be "Ready" ...
	I0815 16:31:37.548799    3848 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
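	The long run of paired GETs that follows is a plain polling loop: each system-critical pod in kube-system (and the node it runs on) is fetched roughly every 500ms until the pod's Ready condition reports True or the 6m0s budget runs out. A hedged client-go sketch of that loop; clientset construction is omitted, and the interval and error handling are assumptions inferred from the request cadence in this log, not minikube's exact code:

package poll

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls one pod about twice a second until its Ready condition
// is True or the timeout expires. Transient GET errors return (false, nil)
// so a briefly unreachable apiserver does not abort the wait.
func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}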
	I0815 16:31:37.548839    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:31:37.548848    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.548854    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.548859    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.555174    3848 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 16:31:37.561193    3848 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dmgt5" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:37.561251    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-dmgt5
	I0815 16:31:37.561256    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.561262    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.561267    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.563487    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.564065    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:37.564072    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.564078    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.564081    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.566147    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.566458    3848 pod_ready.go:93] pod "coredns-6f6b679f8f-dmgt5" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:37.566468    3848 pod_ready.go:82] duration metric: took 5.259716ms for pod "coredns-6f6b679f8f-dmgt5" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:37.566475    3848 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-zc8jj" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:37.566514    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-zc8jj
	I0815 16:31:37.566519    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.566525    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.566529    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.568717    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.569347    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:37.569355    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.569361    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.569365    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.571508    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.571903    3848 pod_ready.go:93] pod "coredns-6f6b679f8f-zc8jj" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:37.571913    3848 pod_ready.go:82] duration metric: took 5.431792ms for pod "coredns-6f6b679f8f-zc8jj" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:37.571919    3848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:37.571962    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000
	I0815 16:31:37.571967    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.571973    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.571976    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.574222    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.574650    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:37.574659    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.574665    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.574669    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.576917    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.577415    3848 pod_ready.go:93] pod "etcd-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:37.577426    3848 pod_ready.go:82] duration metric: took 5.501032ms for pod "etcd-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:37.577433    3848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:37.577470    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m02
	I0815 16:31:37.577478    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.577485    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.577489    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.579610    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.580030    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:37.580038    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.580044    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.580049    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.582713    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.583250    3848 pod_ready.go:93] pod "etcd-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:37.583261    3848 pod_ready.go:82] duration metric: took 5.823471ms for pod "etcd-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:37.583269    3848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:37.745749    3848 request.go:632] Waited for 162.439343ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:37.745806    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:37.745816    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.745824    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.745836    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.748134    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
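	The "Waited ... due to client-side throttling, not priority and fairness" messages here come from client-go's local token-bucket rate limiter, not from API Priority and Fairness on the server. The kapi.go dump above shows QPS:0 and Burst:0 in rest.Config, in which case client-go falls back to its defaults of 5 requests/s with a burst of 10, so the tight pod/node poll loop quickly queues. A sketch of raising the limits when building a clientset; the kubeconfig path and the chosen values are illustrative only:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path; the log loads its own kubeconfig instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // client-go default is 5 when left at 0
	cfg.Burst = 100 // client-go default is 10 when left at 0
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready with QPS=%v Burst=%v: %T\n", cfg.QPS, cfg.Burst, client)
}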
	I0815 16:31:37.945907    3848 request.go:632] Waited for 197.272516ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:37.945950    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:37.945956    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.945962    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.945966    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.948855    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:38.146195    3848 request.go:632] Waited for 62.814852ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:38.146243    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:38.146249    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:38.146296    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:38.146301    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:38.149137    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:38.346943    3848 request.go:632] Waited for 197.306674ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:38.346985    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:38.346994    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:38.347003    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:38.347010    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:38.349878    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:38.583459    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:38.583505    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:38.583514    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:38.583520    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:38.590031    3848 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 16:31:38.745745    3848 request.go:632] Waited for 155.336663ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:38.745818    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:38.745825    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:38.745831    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:38.745836    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:38.748530    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:39.083990    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:39.084003    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:39.084009    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:39.084013    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:39.086519    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:39.146468    3848 request.go:632] Waited for 59.248658ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:39.146510    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:39.146515    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:39.146521    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:39.146525    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:39.148504    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:39.583999    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:39.584017    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:39.584026    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:39.584029    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:39.589510    3848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 16:31:39.590427    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:39.590438    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:39.590445    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:39.590449    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:39.592655    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:39.593056    3848 pod_ready.go:103] pod "etcd-ha-138000-m03" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:40.084185    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:40.084202    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:40.084209    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:40.084214    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:40.086419    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:40.087158    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:40.087166    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:40.087172    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:40.087196    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:40.088975    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:40.584037    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:40.584051    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:40.584058    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:40.584061    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:40.586450    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:40.586944    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:40.586952    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:40.586958    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:40.586963    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:40.589014    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:41.083405    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:41.083421    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:41.083427    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:41.083433    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:41.086228    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:41.086971    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:41.086978    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:41.086985    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:41.086990    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:41.097843    3848 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0815 16:31:41.583963    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:41.583987    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:41.583999    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:41.584008    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:41.587268    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:41.588066    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:41.588074    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:41.588079    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:41.588083    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:41.589716    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:42.083443    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:42.083462    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:42.083471    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:42.083482    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:42.085751    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:42.086179    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:42.086187    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:42.086194    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:42.086197    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:42.087825    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:42.088133    3848 pod_ready.go:103] pod "etcd-ha-138000-m03" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:42.584042    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:42.584070    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:42.584081    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:42.584089    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:42.587530    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:42.588287    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:42.588295    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:42.588301    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:42.588305    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:42.589868    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:43.085149    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:43.085164    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:43.085170    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:43.085174    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:43.087319    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:43.087818    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:43.087825    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:43.087831    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:43.087834    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:43.089562    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:43.583720    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:43.583737    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:43.583744    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:43.583747    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:43.586238    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:43.586831    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:43.586842    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:43.586849    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:43.586852    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:43.589092    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:44.084178    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:44.084189    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:44.084195    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:44.084198    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:44.086364    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:44.086790    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:44.086798    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:44.086805    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:44.086809    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:44.088812    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:44.089107    3848 pod_ready.go:103] pod "etcd-ha-138000-m03" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:44.584718    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:44.584743    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:44.584755    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:44.584763    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:44.587851    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:44.588606    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:44.588615    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:44.588621    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:44.588624    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:44.590403    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:45.083471    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:45.083486    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:45.083492    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:45.083496    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:45.085722    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:45.086170    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:45.086177    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:45.086186    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:45.086189    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:45.087992    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:45.583684    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:45.583761    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:45.583775    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:45.583782    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:45.586696    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:45.587281    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:45.587292    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:45.587300    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:45.587305    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:45.588851    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:46.083567    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:46.083581    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:46.083590    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:46.083595    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:46.086254    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:46.086706    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:46.086714    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:46.086720    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:46.086724    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:46.088505    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:46.583431    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:46.583454    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:46.583474    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:46.583477    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:46.586641    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:46.587367    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:46.587376    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:46.587383    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:46.587389    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:46.590271    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:46.590924    3848 pod_ready.go:103] pod "etcd-ha-138000-m03" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:47.085070    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:47.085088    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:47.085094    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:47.085097    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:47.087411    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:47.087834    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:47.087841    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:47.087847    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:47.087856    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:47.089857    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:47.583460    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:47.583510    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:47.583537    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:47.583547    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:47.586412    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:47.587147    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:47.587155    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:47.587161    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:47.587164    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:47.589077    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:48.084130    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:48.084172    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:48.084180    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:48.084184    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:48.086241    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:48.086700    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:48.086708    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:48.086715    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:48.086719    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:48.088392    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:48.583712    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:48.583726    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:48.583733    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:48.583736    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:48.585950    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:48.586404    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:48.586411    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:48.586417    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:48.586420    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:48.588064    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:49.084795    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:49.084810    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:49.084817    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:49.084821    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:49.087201    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:49.087638    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:49.087646    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:49.087651    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:49.087655    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:49.089294    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:49.089762    3848 pod_ready.go:103] pod "etcd-ha-138000-m03" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:49.584532    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:49.584586    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:49.584596    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:49.584602    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:49.586828    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:49.587368    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:49.587376    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:49.587381    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:49.587386    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:49.589092    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:50.084677    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:50.084702    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:50.084714    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:50.084720    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:50.090233    3848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 16:31:50.091082    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:50.091090    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:50.091095    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:50.091098    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:50.093397    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:50.584557    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:50.584594    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:50.584607    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:50.584614    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:50.587331    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:50.588105    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:50.588113    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:50.588119    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:50.588122    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:50.589783    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:51.084222    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:51.084238    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:51.084245    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:51.084249    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:51.086498    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:51.086853    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:51.086860    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:51.086866    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:51.086869    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:51.088548    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:51.583648    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:51.583662    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:51.583669    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:51.583673    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:51.585837    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:51.586356    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:51.586364    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:51.586370    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:51.586374    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:51.588027    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:51.588324    3848 pod_ready.go:103] pod "etcd-ha-138000-m03" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:52.083439    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:52.083464    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.083477    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.083486    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.086839    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:52.087326    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:52.087334    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.087340    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.087344    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.089021    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:52.089421    3848 pod_ready.go:93] pod "etcd-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:52.089431    3848 pod_ready.go:82] duration metric: took 14.506206257s for pod "etcd-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:52.089443    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:52.089476    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000
	I0815 16:31:52.089481    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.089487    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.089490    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.091044    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:52.091506    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:52.091513    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.091519    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.091522    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.093067    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:52.093523    3848 pod_ready.go:93] pod "kube-apiserver-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:52.093534    3848 pod_ready.go:82] duration metric: took 4.083615ms for pod "kube-apiserver-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:52.093540    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:52.093569    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:31:52.093574    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.093579    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.093583    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.096079    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:52.096682    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:52.096689    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.096695    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.096698    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.098629    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:52.099014    3848 pod_ready.go:93] pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:52.099023    3848 pod_ready.go:82] duration metric: took 5.477344ms for pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:52.099030    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:52.099060    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m03
	I0815 16:31:52.099065    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.099071    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.099075    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.100773    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:52.101171    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:52.101178    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.101184    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.101188    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.108504    3848 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0815 16:31:52.599355    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m03
	I0815 16:31:52.599371    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.599378    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.599380    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.603474    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:31:52.603827    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:52.603834    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.603839    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.603842    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.607400    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:53.100426    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m03
	I0815 16:31:53.100452    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:53.100465    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:53.100469    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:53.103591    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:53.103977    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:53.103985    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:53.103991    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:53.103995    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:53.105550    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:53.600030    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m03
	I0815 16:31:53.600056    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:53.600098    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:53.600106    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:53.603820    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:53.604279    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:53.604287    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:53.604292    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:53.604302    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:53.605948    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:54.100215    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m03
	I0815 16:31:54.100240    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.100248    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.100254    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.103639    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:54.104211    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:54.104222    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.104230    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.104236    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.106285    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:54.106596    3848 pod_ready.go:103] pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:54.600238    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m03
	I0815 16:31:54.600262    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.600275    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.600280    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.603528    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:54.604248    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:54.604259    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.604268    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.604276    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.606261    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:54.606605    3848 pod_ready.go:93] pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:54.606614    3848 pod_ready.go:82] duration metric: took 2.507587207s for pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:54.606621    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:54.606652    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:54.606657    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.606663    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.606677    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.608196    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:54.608645    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:54.608652    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.608658    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.608661    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.610174    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:54.610543    3848 pod_ready.go:93] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:54.610551    3848 pod_ready.go:82] duration metric: took 3.924647ms for pod "kube-controller-manager-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:54.610565    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:54.610597    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000-m02
	I0815 16:31:54.610601    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.610607    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.610611    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.612220    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:54.612637    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:54.612644    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.612648    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.612652    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.614115    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:54.614453    3848 pod_ready.go:93] pod "kube-controller-manager-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:54.614461    3848 pod_ready.go:82] duration metric: took 3.890604ms for pod "kube-controller-manager-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:54.614467    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:54.685393    3848 request.go:632] Waited for 70.886034ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000-m03
	I0815 16:31:54.685542    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000-m03
	I0815 16:31:54.685554    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.685565    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.685572    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.689462    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:54.884047    3848 request.go:632] Waited for 194.079873ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:54.884179    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:54.884194    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.884206    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.884216    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.887378    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:54.887638    3848 pod_ready.go:93] pod "kube-controller-manager-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:54.887648    3848 pod_ready.go:82] duration metric: took 273.176916ms for pod "kube-controller-manager-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:54.887655    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cznkn" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:55.084696    3848 request.go:632] Waited for 197.006461ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cznkn
	I0815 16:31:55.084754    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cznkn
	I0815 16:31:55.084760    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:55.084766    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:55.084770    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:55.086486    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:55.284932    3848 request.go:632] Waited for 198.019424ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:55.285014    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:55.285023    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:55.285031    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:55.285034    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:55.287587    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:55.288003    3848 pod_ready.go:93] pod "kube-proxy-cznkn" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:55.288012    3848 pod_ready.go:82] duration metric: took 400.352996ms for pod "kube-proxy-cznkn" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:55.288019    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kxghx" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:55.484813    3848 request.go:632] Waited for 196.749045ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kxghx
	I0815 16:31:55.484909    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kxghx
	I0815 16:31:55.484933    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:55.484946    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:55.484952    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:55.487936    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:55.684903    3848 request.go:632] Waited for 196.468256ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:55.684989    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:55.684999    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:55.685010    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:55.685019    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:55.688164    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:55.688606    3848 pod_ready.go:93] pod "kube-proxy-kxghx" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:55.688619    3848 pod_ready.go:82] duration metric: took 400.595564ms for pod "kube-proxy-kxghx" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:55.688628    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qpth7" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:55.884647    3848 request.go:632] Waited for 195.972571ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qpth7
	I0815 16:31:55.884703    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qpth7
	I0815 16:31:55.884734    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:55.884828    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:55.884842    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:55.887780    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:56.085059    3848 request.go:632] Waited for 196.76753ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m04
	I0815 16:31:56.085155    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m04
	I0815 16:31:56.085166    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:56.085178    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:56.085187    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:56.088438    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:56.088843    3848 pod_ready.go:98] node "ha-138000-m04" hosting pod "kube-proxy-qpth7" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-138000-m04" has status "Ready":"Unknown"
	I0815 16:31:56.088858    3848 pod_ready.go:82] duration metric: took 400.224535ms for pod "kube-proxy-qpth7" in "kube-system" namespace to be "Ready" ...
	E0815 16:31:56.088867    3848 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-138000-m04" hosting pod "kube-proxy-qpth7" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-138000-m04" has status "Ready":"Unknown"
	I0815 16:31:56.088873    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tf79g" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:56.284412    3848 request.go:632] Waited for 195.467169ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tf79g
	I0815 16:31:56.284533    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tf79g
	I0815 16:31:56.284544    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:56.284556    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:56.284567    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:56.287997    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:56.483641    3848 request.go:632] Waited for 195.132786ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:56.483717    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:56.483778    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:56.483801    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:56.483810    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:56.486922    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:56.487377    3848 pod_ready.go:93] pod "kube-proxy-tf79g" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:56.487387    3848 pod_ready.go:82] duration metric: took 398.50917ms for pod "kube-proxy-tf79g" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:56.487394    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:56.684509    3848 request.go:632] Waited for 197.075187ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000
	I0815 16:31:56.684584    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000
	I0815 16:31:56.684592    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:56.684600    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:56.684606    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:56.687177    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:56.884267    3848 request.go:632] Waited for 196.705982ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:56.884375    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:56.884384    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:56.884392    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:56.884396    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:56.886486    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:56.886846    3848 pod_ready.go:93] pod "kube-scheduler-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:56.886854    3848 pod_ready.go:82] duration metric: took 399.455831ms for pod "kube-scheduler-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:56.886860    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:57.083869    3848 request.go:632] Waited for 196.961301ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m02
	I0815 16:31:57.083950    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m02
	I0815 16:31:57.083960    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:57.083983    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:57.083992    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:57.087081    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:57.285517    3848 request.go:632] Waited for 197.962246ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:57.285639    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:57.285649    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:57.285659    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:57.285667    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:57.288947    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:57.289317    3848 pod_ready.go:93] pod "kube-scheduler-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:57.289331    3848 pod_ready.go:82] duration metric: took 402.465658ms for pod "kube-scheduler-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:57.289340    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:57.483919    3848 request.go:632] Waited for 194.531212ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m03
	I0815 16:31:57.484018    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m03
	I0815 16:31:57.484029    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:57.484041    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:57.484049    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:57.486736    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:57.683533    3848 request.go:632] Waited for 196.372817ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:57.683619    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:57.683630    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:57.683642    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:57.683649    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:57.686767    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:57.687131    3848 pod_ready.go:93] pod "kube-scheduler-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:57.687146    3848 pod_ready.go:82] duration metric: took 397.799248ms for pod "kube-scheduler-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:57.687155    3848 pod_ready.go:39] duration metric: took 20.138416099s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
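
For readers reconstructing this phase: pod_ready.go is polling each system pod's Ready condition through the API server, which is what the alternating pod/node GETs above are. A minimal client-go sketch of the same wait follows; the helper name waitPodReady, the poll interval, and the example pod name are illustrative, not minikube's source.

    package main

    import (
        "context"
        "log"
        "os"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls until the named pod reports the PodReady
    // condition as True, or the timeout elapses.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat API hiccups as "not ready yet"
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        if err := waitPodReady(context.Background(), cs, "kube-system", "kube-proxy-cznkn", 6*time.Minute); err != nil {
            log.Fatal(err)
        }
        log.Println("pod is Ready")
    }

The "Waited ... due to client-side throttling" lines above come from client-go's built-in rate limiter, not from this wait loop itself.
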
	I0815 16:31:57.687170    3848 api_server.go:52] waiting for apiserver process to appear ...
	I0815 16:31:57.687237    3848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 16:31:57.700597    3848 api_server.go:72] duration metric: took 20.837986375s to wait for apiserver process to appear ...
	I0815 16:31:57.700610    3848 api_server.go:88] waiting for apiserver healthz status ...
	I0815 16:31:57.700622    3848 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0815 16:31:57.703621    3848 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0815 16:31:57.703653    3848 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0815 16:31:57.703658    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:57.703664    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:57.703670    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:57.704168    3848 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0815 16:31:57.704198    3848 api_server.go:141] control plane version: v1.31.0
	I0815 16:31:57.704207    3848 api_server.go:131] duration metric: took 3.590796ms to wait for apiserver health ...
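
The healthz probe logged above is a plain HTTPS GET whose body must be the literal string "ok". A standalone sketch of that check, assuming the InsecureSkipVerify transport as a simplification (minikube instead trusts the cluster CA it generated):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // apiserverHealthy mirrors the probe above: GET <endpoint>/healthz must
    // return HTTP 200 with the body "ok".
    func apiserverHealthy(endpoint string) (bool, error) {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // Simplification for this sketch only: skip certificate checks.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(endpoint + "/healthz")
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return false, err
        }
        return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }

    func main() {
        healthy, err := apiserverHealthy("https://192.169.0.5:8443")
        fmt.Println(healthy, err)
    }
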
	I0815 16:31:57.704213    3848 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 16:31:57.884532    3848 request.go:632] Waited for 180.27549ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:31:57.884634    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:31:57.884645    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:57.884656    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:57.884661    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:57.889257    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:31:57.894492    3848 system_pods.go:59] 26 kube-system pods found
	I0815 16:31:57.894504    3848 system_pods.go:61] "coredns-6f6b679f8f-dmgt5" [47d73953-ec2c-4f17-b2b8-d6a9b5e5a316] Running
	I0815 16:31:57.894508    3848 system_pods.go:61] "coredns-6f6b679f8f-zc8jj" [b4a9df39-b09d-4bc3-97f6-b3176ff8e842] Running
	I0815 16:31:57.894511    3848 system_pods.go:61] "etcd-ha-138000" [c2e7e508-a157-4581-a446-2f48e51c0c16] Running
	I0815 16:31:57.894514    3848 system_pods.go:61] "etcd-ha-138000-m02" [4f371d3f-821b-4eae-ab52-5b75d90acd67] Running
	I0815 16:31:57.894516    3848 system_pods.go:61] "etcd-ha-138000-m03" [ebdd90b3-c3b2-44c2-a411-4f6afa67a455] Running
	I0815 16:31:57.894519    3848 system_pods.go:61] "kindnet-77dc6" [24bfc069-9446-43a7-aa61-308c1a62a20e] Running
	I0815 16:31:57.894522    3848 system_pods.go:61] "kindnet-dsvxt" [7645852b-8535-4cb4-8324-15c9b55176e3] Running
	I0815 16:31:57.894525    3848 system_pods.go:61] "kindnet-m887r" [ba31865b-c712-47a8-9fd8-06420270ac8b] Running
	I0815 16:31:57.894527    3848 system_pods.go:61] "kindnet-z6mnx" [fc6211a0-df99-4350-90ba-a18f74bd1bfc] Running
	I0815 16:31:57.894530    3848 system_pods.go:61] "kube-apiserver-ha-138000" [bbc684f7-d8e2-44f2-987a-df9d05cf54fc] Running
	I0815 16:31:57.894534    3848 system_pods.go:61] "kube-apiserver-ha-138000-m02" [c30e129b-f475-4131-a25e-7eeecb39cbea] Running
	I0815 16:31:57.894537    3848 system_pods.go:61] "kube-apiserver-ha-138000-m03" [90304f02-10f2-446d-b731-674df5602401] Running
	I0815 16:31:57.894541    3848 system_pods.go:61] "kube-controller-manager-ha-138000" [1d4148d1-9798-4662-91de-d9a7dae634e2] Running
	I0815 16:31:57.894545    3848 system_pods.go:61] "kube-controller-manager-ha-138000-m02" [ad693dae-cb0a-46ca-955b-33a9465d911a] Running
	I0815 16:31:57.894547    3848 system_pods.go:61] "kube-controller-manager-ha-138000-m03" [11aa509a-dade-4b66-b157-4d4cccb7f349] Running
	I0815 16:31:57.894550    3848 system_pods.go:61] "kube-proxy-cznkn" [61cd1a9a-80fb-4d0e-a4dd-f5ed2d6ddc0f] Running
	I0815 16:31:57.894553    3848 system_pods.go:61] "kube-proxy-kxghx" [27f61dcc-2812-4038-af2a-aa9f46033869] Running
	I0815 16:31:57.894555    3848 system_pods.go:61] "kube-proxy-qpth7" [a343f80b-0fe9-4c88-9782-5fbf9a6170d1] Running
	I0815 16:31:57.894558    3848 system_pods.go:61] "kube-proxy-tf79g" [ef8a2aeb-55c4-436e-a306-da8b93c9707f] Running
	I0815 16:31:57.894560    3848 system_pods.go:61] "kube-scheduler-ha-138000" [ee1d3eb0-596d-4bf0-bed4-dbd38b286e38] Running
	I0815 16:31:57.894563    3848 system_pods.go:61] "kube-scheduler-ha-138000-m02" [1cc29309-49ad-427e-862a-758cc5711eef] Running
	I0815 16:31:57.894566    3848 system_pods.go:61] "kube-scheduler-ha-138000-m03" [2e3efb3c-6903-4306-adec-d4a47b3a56bd] Running
	I0815 16:31:57.894572    3848 system_pods.go:61] "kube-vip-ha-138000" [1b8c66e5-ffaa-4d4b-a220-8eb664b79d91] Running
	I0815 16:31:57.894575    3848 system_pods.go:61] "kube-vip-ha-138000-m02" [a0fe2ba6-1ace-456f-ab2e-15c43303dcad] Running
	I0815 16:31:57.894578    3848 system_pods.go:61] "kube-vip-ha-138000-m03" [44fe2bd5-bd48-4f86-b38a-7e4b3554174c] Running
	I0815 16:31:57.894581    3848 system_pods.go:61] "storage-provisioner" [35f3a9d7-cb68-4b10-82a2-1bd72d8d1aa6] Running
	I0815 16:31:57.894585    3848 system_pods.go:74] duration metric: took 190.369062ms to wait for pod list to return data ...
	I0815 16:31:57.894590    3848 default_sa.go:34] waiting for default service account to be created ...
	I0815 16:31:58.083903    3848 request.go:632] Waited for 189.255195ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0815 16:31:58.083992    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0815 16:31:58.084004    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:58.084016    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:58.084024    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:58.087624    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:58.087687    3848 default_sa.go:45] found service account: "default"
	I0815 16:31:58.087696    3848 default_sa.go:55] duration metric: took 193.101509ms for default service account to be created ...
	I0815 16:31:58.087703    3848 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 16:31:58.284595    3848 request.go:632] Waited for 196.812141ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:31:58.284716    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:31:58.284728    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:58.284740    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:58.284748    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:58.290177    3848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 16:31:58.295724    3848 system_pods.go:86] 26 kube-system pods found
	I0815 16:31:58.295738    3848 system_pods.go:89] "coredns-6f6b679f8f-dmgt5" [47d73953-ec2c-4f17-b2b8-d6a9b5e5a316] Running
	I0815 16:31:58.295742    3848 system_pods.go:89] "coredns-6f6b679f8f-zc8jj" [b4a9df39-b09d-4bc3-97f6-b3176ff8e842] Running
	I0815 16:31:58.295747    3848 system_pods.go:89] "etcd-ha-138000" [c2e7e508-a157-4581-a446-2f48e51c0c16] Running
	I0815 16:31:58.295759    3848 system_pods.go:89] "etcd-ha-138000-m02" [4f371d3f-821b-4eae-ab52-5b75d90acd67] Running
	I0815 16:31:58.295765    3848 system_pods.go:89] "etcd-ha-138000-m03" [ebdd90b3-c3b2-44c2-a411-4f6afa67a455] Running
	I0815 16:31:58.295768    3848 system_pods.go:89] "kindnet-77dc6" [24bfc069-9446-43a7-aa61-308c1a62a20e] Running
	I0815 16:31:58.295779    3848 system_pods.go:89] "kindnet-dsvxt" [7645852b-8535-4cb4-8324-15c9b55176e3] Running
	I0815 16:31:58.295783    3848 system_pods.go:89] "kindnet-m887r" [ba31865b-c712-47a8-9fd8-06420270ac8b] Running
	I0815 16:31:58.295786    3848 system_pods.go:89] "kindnet-z6mnx" [fc6211a0-df99-4350-90ba-a18f74bd1bfc] Running
	I0815 16:31:58.295789    3848 system_pods.go:89] "kube-apiserver-ha-138000" [bbc684f7-d8e2-44f2-987a-df9d05cf54fc] Running
	I0815 16:31:58.295791    3848 system_pods.go:89] "kube-apiserver-ha-138000-m02" [c30e129b-f475-4131-a25e-7eeecb39cbea] Running
	I0815 16:31:58.295795    3848 system_pods.go:89] "kube-apiserver-ha-138000-m03" [90304f02-10f2-446d-b731-674df5602401] Running
	I0815 16:31:58.295798    3848 system_pods.go:89] "kube-controller-manager-ha-138000" [1d4148d1-9798-4662-91de-d9a7dae634e2] Running
	I0815 16:31:58.295801    3848 system_pods.go:89] "kube-controller-manager-ha-138000-m02" [ad693dae-cb0a-46ca-955b-33a9465d911a] Running
	I0815 16:31:58.295804    3848 system_pods.go:89] "kube-controller-manager-ha-138000-m03" [11aa509a-dade-4b66-b157-4d4cccb7f349] Running
	I0815 16:31:58.295807    3848 system_pods.go:89] "kube-proxy-cznkn" [61cd1a9a-80fb-4d0e-a4dd-f5ed2d6ddc0f] Running
	I0815 16:31:58.295814    3848 system_pods.go:89] "kube-proxy-kxghx" [27f61dcc-2812-4038-af2a-aa9f46033869] Running
	I0815 16:31:58.295818    3848 system_pods.go:89] "kube-proxy-qpth7" [a343f80b-0fe9-4c88-9782-5fbf9a6170d1] Running
	I0815 16:31:58.295821    3848 system_pods.go:89] "kube-proxy-tf79g" [ef8a2aeb-55c4-436e-a306-da8b93c9707f] Running
	I0815 16:31:58.295824    3848 system_pods.go:89] "kube-scheduler-ha-138000" [ee1d3eb0-596d-4bf0-bed4-dbd38b286e38] Running
	I0815 16:31:58.295827    3848 system_pods.go:89] "kube-scheduler-ha-138000-m02" [1cc29309-49ad-427e-862a-758cc5711eef] Running
	I0815 16:31:58.295830    3848 system_pods.go:89] "kube-scheduler-ha-138000-m03" [2e3efb3c-6903-4306-adec-d4a47b3a56bd] Running
	I0815 16:31:58.295833    3848 system_pods.go:89] "kube-vip-ha-138000" [1b8c66e5-ffaa-4d4b-a220-8eb664b79d91] Running
	I0815 16:31:58.295836    3848 system_pods.go:89] "kube-vip-ha-138000-m02" [a0fe2ba6-1ace-456f-ab2e-15c43303dcad] Running
	I0815 16:31:58.295838    3848 system_pods.go:89] "kube-vip-ha-138000-m03" [44fe2bd5-bd48-4f86-b38a-7e4b3554174c] Running
	I0815 16:31:58.295841    3848 system_pods.go:89] "storage-provisioner" [35f3a9d7-cb68-4b10-82a2-1bd72d8d1aa6] Running
	I0815 16:31:58.295845    3848 system_pods.go:126] duration metric: took 208.13908ms to wait for k8s-apps to be running ...
	I0815 16:31:58.295851    3848 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 16:31:58.295902    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 16:31:58.307696    3848 system_svc.go:56] duration metric: took 11.840404ms WaitForService to wait for kubelet
	I0815 16:31:58.307710    3848 kubeadm.go:582] duration metric: took 21.445104276s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 16:31:58.307721    3848 node_conditions.go:102] verifying NodePressure condition ...
	I0815 16:31:58.483467    3848 request.go:632] Waited for 175.699042ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0815 16:31:58.483523    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0815 16:31:58.483531    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:58.483546    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:58.483605    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:58.487271    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:58.488234    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:31:58.488246    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:31:58.488253    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:31:58.488256    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:31:58.488259    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:31:58.488263    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:31:58.488266    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:31:58.488269    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:31:58.488272    3848 node_conditions.go:105] duration metric: took 180.547852ms to run NodePressure ...
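
The per-node capacity figures printed by node_conditions.go come from each Node object's .Status.Capacity map. A sketch that lists the same two values, assuming the same kubeconfig wiring as the earlier pod-readiness sketch:

    package main

    import (
        "context"
        "fmt"
        "log"
        "os"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        // Print the same capacity fields the log reports for each node.
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
    }
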
	I0815 16:31:58.488280    3848 start.go:241] waiting for startup goroutines ...
	I0815 16:31:58.488303    3848 start.go:255] writing updated cluster config ...
	I0815 16:31:58.511626    3848 out.go:201] 
	I0815 16:31:58.532028    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:31:58.532166    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:31:58.553589    3848 out.go:177] * Starting "ha-138000-m04" worker node in "ha-138000" cluster
	I0815 16:31:58.594430    3848 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:31:58.594502    3848 cache.go:56] Caching tarball of preloaded images
	I0815 16:31:58.594676    3848 preload.go:172] Found /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0815 16:31:58.594694    3848 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:31:58.594833    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:31:58.595712    3848 start.go:360] acquireMachinesLock for ha-138000-m04: {Name:mkac8034c8a60617271f2754a03dcaf74fc4fef7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:31:58.595816    3848 start.go:364] duration metric: took 79.794µs to acquireMachinesLock for "ha-138000-m04"
	I0815 16:31:58.595841    3848 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:31:58.595851    3848 fix.go:54] fixHost starting: m04
	I0815 16:31:58.596274    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:31:58.596311    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:31:58.605762    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52311
	I0815 16:31:58.606137    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:31:58.606475    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:31:58.606484    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:31:58.606737    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:31:58.606878    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:31:58.606971    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetState
	I0815 16:31:58.607059    3848 main.go:141] libmachine: (ha-138000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:31:58.607149    3848 main.go:141] libmachine: (ha-138000-m04) DBG | hyperkit pid from json: 3240
	I0815 16:31:58.608054    3848 main.go:141] libmachine: (ha-138000-m04) DBG | hyperkit pid 3240 missing from process table
	I0815 16:31:58.608090    3848 fix.go:112] recreateIfNeeded on ha-138000-m04: state=Stopped err=<nil>
	I0815 16:31:58.608101    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	W0815 16:31:58.608193    3848 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:31:58.629670    3848 out.go:177] * Restarting existing hyperkit VM for "ha-138000-m04" ...
	I0815 16:31:58.671397    3848 main.go:141] libmachine: (ha-138000-m04) Calling .Start
	I0815 16:31:58.671607    3848 main.go:141] libmachine: (ha-138000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:31:58.671648    3848 main.go:141] libmachine: (ha-138000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/hyperkit.pid
	I0815 16:31:58.671760    3848 main.go:141] libmachine: (ha-138000-m04) DBG | Using UUID e49817f2-f6c4-46a0-a846-8a8b2da04ea9
	I0815 16:31:58.700620    3848 main.go:141] libmachine: (ha-138000-m04) DBG | Generated MAC 66:d1:6e:6f:24:26
	I0815 16:31:58.700645    3848 main.go:141] libmachine: (ha-138000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000
	I0815 16:31:58.700779    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e49817f2-f6c4-46a0-a846-8a8b2da04ea9", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002ad680)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:31:58.700809    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e49817f2-f6c4-46a0-a846-8a8b2da04ea9", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002ad680)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:31:58.700889    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "e49817f2-f6c4-46a0-a846-8a8b2da04ea9", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/ha-138000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"}
	I0815 16:31:58.700927    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U e49817f2-f6c4-46a0-a846-8a8b2da04ea9 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/ha-138000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"
	I0815 16:31:58.700973    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0815 16:31:58.702332    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 DEBUG: hyperkit: Pid is 4201
	I0815 16:31:58.702793    3848 main.go:141] libmachine: (ha-138000-m04) DBG | Attempt 0
	I0815 16:31:58.702829    3848 main.go:141] libmachine: (ha-138000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:31:58.702904    3848 main.go:141] libmachine: (ha-138000-m04) DBG | hyperkit pid from json: 4201
	I0815 16:31:58.703953    3848 main.go:141] libmachine: (ha-138000-m04) DBG | Searching for 66:d1:6e:6f:24:26 in /var/db/dhcpd_leases ...
	I0815 16:31:58.704027    3848 main.go:141] libmachine: (ha-138000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0815 16:31:58.704048    3848 main.go:141] libmachine: (ha-138000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:31:58.704066    3848 main.go:141] libmachine: (ha-138000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:31:58.704081    3848 main.go:141] libmachine: (ha-138000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:31:58.704095    3848 main.go:141] libmachine: (ha-138000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be8e15}
	I0815 16:31:58.704105    3848 main.go:141] libmachine: (ha-138000-m04) DBG | Found match: 66:d1:6e:6f:24:26
	I0815 16:31:58.704118    3848 main.go:141] libmachine: (ha-138000-m04) DBG | IP: 192.169.0.8
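
The hyperkit driver has no guest agent, so it recovers the restarted VM's IP by matching the generated MAC against macOS's DHCP lease file, which is the /var/db/dhcpd_leases search above. A simplified parser sketch; the lease-file grammar is reduced here to the two fields actually used, with field names assumed from bootpd's format, so treat the details as illustrative:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // ipForMAC scans the lease file for a block whose hw_address ends with
    // the given MAC and returns that block's ip_address.
    func ipForMAC(leasesPath, mac string) (string, error) {
        f, err := os.Open(leasesPath)
        if err != nil {
            return "", err
        }
        defer f.Close()

        var ip string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case line == "{": // new lease block: reset state
                ip = ""
            case strings.HasPrefix(line, "ip_address="):
                ip = strings.TrimPrefix(line, "ip_address=")
            case strings.HasPrefix(line, "hw_address="):
                // stored as e.g. "hw_address=1,66:d1:6e:6f:24:26"
                if strings.HasSuffix(line, mac) {
                    return ip, sc.Err()
                }
            }
        }
        return "", fmt.Errorf("MAC %s not found in %s", mac, leasesPath)
    }

    func main() {
        ip, err := ipForMAC("/var/db/dhcpd_leases", "66:d1:6e:6f:24:26")
        fmt.Println(ip, err)
    }
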
	I0815 16:31:58.704138    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetConfigRaw
	I0815 16:31:58.704996    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetIP
	I0815 16:31:58.705244    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:31:58.705856    3848 machine.go:93] provisionDockerMachine start ...
	I0815 16:31:58.705869    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:31:58.705978    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:31:58.706098    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:31:58.706206    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:31:58.706333    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:31:58.706439    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:31:58.706614    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:31:58.706786    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0815 16:31:58.706796    3848 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 16:31:58.710462    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0815 16:31:58.720101    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0815 16:31:58.720991    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:31:58.721013    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:31:58.721022    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:31:58.721032    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:31:59.105309    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0815 16:31:59.105335    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0815 16:31:59.220059    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:31:59.220079    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:31:59.220089    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:31:59.220095    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:31:59.220911    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0815 16:31:59.220942    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0815 16:32:04.889008    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:32:04 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0815 16:32:04.889030    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:32:04 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0815 16:32:04.889049    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:32:04 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0815 16:32:04.912331    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:32:04 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0815 16:32:33.787060    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 16:32:33.787084    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetMachineName
	I0815 16:32:33.787215    3848 buildroot.go:166] provisioning hostname "ha-138000-m04"
	I0815 16:32:33.787226    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetMachineName
	I0815 16:32:33.787318    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:33.787397    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:33.787483    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:33.787564    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:33.787640    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:33.787765    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:32:33.787937    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0815 16:32:33.787945    3848 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-138000-m04 && echo "ha-138000-m04" | sudo tee /etc/hostname
	I0815 16:32:33.847992    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-138000-m04
	
	I0815 16:32:33.848008    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:33.848137    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:33.848240    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:33.848322    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:33.848426    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:33.848548    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:32:33.848705    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0815 16:32:33.848716    3848 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-138000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-138000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-138000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 16:32:33.904813    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
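
The SSH command above is an idempotent /etc/hosts fix-up: rewrite an existing 127.0.1.1 entry if one exists, otherwise append one, and do nothing if the hostname is already present. A sketch of templating that command from Go; this is illustrative, not minikube's provisioner code:

    package main

    import "fmt"

    // etcHostsCmd renders the shell snippet seen in the log for a hostname.
    func etcHostsCmd(hostname string) string {
        return fmt.Sprintf(`
    if ! grep -xq '.*\s%[1]s' /etc/hosts; then
        if grep -xq '127.0.1.1\s.*' /etc/hosts; then
            sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
        else
            echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
        fi
    fi`, hostname)
    }

    func main() {
        fmt.Println(etcHostsCmd("ha-138000-m04"))
    }
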
	I0815 16:32:33.904838    3848 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19452-977/.minikube CaCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19452-977/.minikube}
	I0815 16:32:33.904848    3848 buildroot.go:174] setting up certificates
	I0815 16:32:33.904853    3848 provision.go:84] configureAuth start
	I0815 16:32:33.904860    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetMachineName
	I0815 16:32:33.904995    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetIP
	I0815 16:32:33.905084    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:33.905176    3848 provision.go:143] copyHostCerts
	I0815 16:32:33.905203    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:32:33.905264    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem, removing ...
	I0815 16:32:33.905280    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:32:33.915862    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem (1082 bytes)
	I0815 16:32:33.936338    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:32:33.936399    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem, removing ...
	I0815 16:32:33.936405    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:32:33.960707    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem (1123 bytes)
	I0815 16:32:33.961241    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:32:33.961296    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem, removing ...
	I0815 16:32:33.961303    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:32:33.961391    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem (1679 bytes)
	I0815 16:32:33.961771    3848 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem org=jenkins.ha-138000-m04 san=[127.0.0.1 192.169.0.8 ha-138000-m04 localhost minikube]
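
provision.go:117 issues a TLS server certificate for the node with exactly the SAN list shown above (loopback, the node IP, and the host names). A compact standard-library sketch of issuing such a certificate; the helper name is illustrative, and the throwaway CA in main stands in for the ca.pem/ca-key.pem material the log loads:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    // issueServerCert signs a server cert for the node's SANs with the CA.
    func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-138000-m04"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().AddDate(10, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-138000-m04", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.8")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        return der, key, err
    }

    func main() {
        // Throwaway self-signed CA, standing in for the minikube CA files.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "test-ca"},
            NotBefore:             time.Now().Add(-time.Hour),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        ca, _ := x509.ParseCertificate(caDER)
        der, _, err := issueServerCert(ca, caKey)
        fmt.Println(len(der), err)
    }
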
	I0815 16:32:34.048242    3848 provision.go:177] copyRemoteCerts
	I0815 16:32:34.048297    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 16:32:34.048312    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:34.048461    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:34.048558    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:34.048644    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:34.048725    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/id_rsa Username:docker}
	I0815 16:32:34.079744    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 16:32:34.079820    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 16:32:34.099832    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 16:32:34.099904    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 16:32:34.119955    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 16:32:34.120035    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 16:32:34.140743    3848 provision.go:87] duration metric: took 235.600662ms to configureAuth
	I0815 16:32:34.140757    3848 buildroot.go:189] setting minikube options for container-runtime
	I0815 16:32:34.140940    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:32:34.140975    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:32:34.141106    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:34.141218    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:34.141307    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:34.141393    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:34.141471    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:34.141580    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:32:34.141705    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0815 16:32:34.141713    3848 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0815 16:32:34.191590    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0815 16:32:34.191604    3848 buildroot.go:70] root file system type: tmpfs
	I0815 16:32:34.191676    3848 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0815 16:32:34.191686    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:34.191824    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:34.191939    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:34.192031    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:34.192133    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:34.192260    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:32:34.192405    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0815 16:32:34.192449    3848 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0815 16:32:34.253544    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	Environment=NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0815 16:32:34.253562    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:34.253696    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:34.253789    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:34.253863    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:34.253953    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:34.254084    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:32:34.254223    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0815 16:32:34.254235    3848 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0815 16:32:35.839568    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
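The SSH command above is an install-if-changed idiom: diff exits non-zero when the files differ or, as in this run, when the old file cannot even be stat'ed, and that non-zero exit drives the || branch that installs and restarts. A generic sketch of the same idiom:

	new=/lib/systemd/system/docker.service.new
	cur=/lib/systemd/system/docker.service
	# diff exits 0 only when both files exist and match;
	# anything else triggers the install-and-restart branch
	sudo diff -u "$cur" "$new" || {
	  sudo mv "$new" "$cur"
	  sudo systemctl daemon-reload
	  sudo systemctl -f enable docker && sudo systemctl -f restart docker
	}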
	I0815 16:32:35.839584    3848 machine.go:96] duration metric: took 37.11179722s to provisionDockerMachine
	I0815 16:32:35.839591    3848 start.go:293] postStartSetup for "ha-138000-m04" (driver="hyperkit")
	I0815 16:32:35.839597    3848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 16:32:35.839606    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:32:35.839797    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 16:32:35.839811    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:35.839906    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:35.839987    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:35.840069    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:35.840139    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/id_rsa Username:docker}
	I0815 16:32:35.872247    3848 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 16:32:35.875358    3848 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 16:32:35.875369    3848 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/addons for local assets ...
	I0815 16:32:35.875469    3848 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/files for local assets ...
	I0815 16:32:35.875649    3848 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> 14982.pem in /etc/ssl/certs
	I0815 16:32:35.875656    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /etc/ssl/certs/14982.pem
	I0815 16:32:35.875856    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 16:32:35.884005    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:32:35.903707    3848 start.go:296] duration metric: took 64.039683ms for postStartSetup
	I0815 16:32:35.903730    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:32:35.903903    3848 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0815 16:32:35.903917    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:35.904012    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:35.904095    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:35.904168    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:35.904243    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/id_rsa Username:docker}
	I0815 16:32:35.936201    3848 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0815 16:32:35.936261    3848 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
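Before the VM was recreated, minikube preserved /etc under /var/lib/minikube/backup; the rsync above plays it back. The flag choice matters: --archive keeps permissions and ownership, and --update leaves any file that is already newer on the freshly provisioned machine untouched. A standalone sketch of the restore step:

	# restore preserved config; skip files the new provision already updated
	sudo rsync --archive --update /var/lib/minikube/backup/etc /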
	I0815 16:32:35.969821    3848 fix.go:56] duration metric: took 37.351909726s for fixHost
	I0815 16:32:35.969846    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:35.969981    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:35.970066    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:35.970160    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:35.970248    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:35.970357    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:32:35.970503    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0815 16:32:35.970511    3848 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 16:32:36.019594    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723764755.882644542
	
	I0815 16:32:36.019607    3848 fix.go:216] guest clock: 1723764755.882644542
	I0815 16:32:36.019612    3848 fix.go:229] Guest: 2024-08-15 16:32:35.882644542 -0700 PDT Remote: 2024-08-15 16:32:35.969836 -0700 PDT m=+161.949888378 (delta=-87.191458ms)
	I0815 16:32:36.019628    3848 fix.go:200] guest clock delta is within tolerance: -87.191458ms
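The clock check above reads the guest's wall clock over SSH (date +%s.%N) and compares it with the host clock at completion; the -87ms delta is inside tolerance, so no resync happens. A sketch of the same comparison (the 2s tolerance is an assumption for illustration -- the actual threshold is not shown in this log; %N requires GNU date on both ends):

	guest=$(ssh docker@192.169.0.8 'date +%s.%N')   # guest wall clock
	host=$(date +%s.%N)                              # local wall clock (GNU date)
	awk -v g="$guest" -v h="$host" 'BEGIN {
	  d = g - h
	  printf "delta=%.6fs\n", d
	  exit (d > -2 && d < 2) ? 0 : 1                 # hypothetical 2s tolerance
	}' && echo "within tolerance" || echo "resync needed"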
	I0815 16:32:36.019633    3848 start.go:83] releasing machines lock for "ha-138000-m04", held for 37.401695552s
	I0815 16:32:36.019652    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:32:36.019780    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetIP
	I0815 16:32:36.042030    3848 out.go:177] * Found network options:
	I0815 16:32:36.062147    3848 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	W0815 16:32:36.083026    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 16:32:36.083070    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 16:32:36.083084    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 16:32:36.083102    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:32:36.083847    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:32:36.084058    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:32:36.084240    3848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 16:32:36.084283    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	W0815 16:32:36.084353    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 16:32:36.084375    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 16:32:36.084394    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 16:32:36.084487    3848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 16:32:36.084508    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:36.084519    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:36.084733    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:36.084745    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:36.084957    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:36.084992    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:36.085156    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:36.085189    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/id_rsa Username:docker}
	I0815 16:32:36.085315    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/id_rsa Username:docker}
	W0815 16:32:36.114740    3848 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 16:32:36.114803    3848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 16:32:36.163124    3848 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 16:32:36.163145    3848 start.go:495] detecting cgroup driver to use...
	I0815 16:32:36.163258    3848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:32:36.179534    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0815 16:32:36.187872    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0815 16:32:36.196474    3848 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0815 16:32:36.196528    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0815 16:32:36.204752    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:32:36.212948    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0815 16:32:36.221222    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:32:36.229511    3848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 16:32:36.238142    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0815 16:32:36.246643    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0815 16:32:36.254862    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
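The sed sequence above rewrites /etc/containerd/config.toml in place: pin the pause image, force SystemdCgroup = false (i.e. the cgroupfs driver), migrate any v1/runc.v1 runtime references to io.containerd.runc.v2, reset the CNI conf_dir, and re-enable unprivileged ports. A quick way to confirm the cgroup-driver edit landed (the TOML shown in comments is illustrative context, not copied from this run):

	grep -n -A1 'runtimes.runc.options' /etc/containerd/config.toml
	#   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	#     SystemdCgroup = false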
	I0815 16:32:36.263281    3848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 16:32:36.270596    3848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 16:32:36.278325    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:32:36.377803    3848 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0815 16:32:36.396329    3848 start.go:495] detecting cgroup driver to use...
	I0815 16:32:36.396399    3848 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0815 16:32:36.411192    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:32:36.423875    3848 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 16:32:36.437859    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:32:36.449142    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:32:36.460191    3848 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0815 16:32:36.479331    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:32:36.491179    3848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:32:36.506341    3848 ssh_runner.go:195] Run: which cri-dockerd
	I0815 16:32:36.509156    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0815 16:32:36.517306    3848 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0815 16:32:36.530887    3848 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0815 16:32:36.631226    3848 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0815 16:32:36.742723    3848 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0815 16:32:36.742750    3848 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
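Only the size of /etc/docker/daemon.json (130 bytes) is recorded here, not its content. A plausible reconstruction based on minikube's usual docker template, flagged explicitly as an assumption:

	# Hypothetical content; the log only records that 130 bytes were copied.
	sudo tee /etc/docker/daemon.json <<'EOF'
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"],
	  "log-driver": "json-file",
	  "log-opts": { "max-size": "100m" },
	  "storage-driver": "overlay2"
	}
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker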
	I0815 16:32:36.756569    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:32:36.851332    3848 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0815 16:32:39.062024    3848 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.208594053s)
	I0815 16:32:39.062086    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0815 16:32:39.072858    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:32:39.083135    3848 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0815 16:32:39.180174    3848 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0815 16:32:39.296201    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:32:39.397264    3848 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0815 16:32:39.409768    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:32:39.419919    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:32:39.520172    3848 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0815 16:32:39.580712    3848 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0815 16:32:39.580787    3848 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0815 16:32:39.585172    3848 start.go:563] Will wait 60s for crictl version
	I0815 16:32:39.585233    3848 ssh_runner.go:195] Run: which crictl
	I0815 16:32:39.588436    3848 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 16:32:39.616400    3848 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
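The crictl version call above resolves its endpoint from /etc/crictl.yaml, which was rewritten a few lines earlier to point at cri-dockerd once docker was chosen over containerd and crio. A sketch to verify by hand (socket path taken from this log):

	cat /etc/crictl.yaml
	# runtime-endpoint: unix:///var/run/cri-dockerd.sock
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version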
	I0815 16:32:39.616480    3848 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:32:39.635416    3848 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:32:39.674509    3848 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0815 16:32:39.715170    3848 out.go:177]   - env NO_PROXY=192.169.0.5
	I0815 16:32:39.736207    3848 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0815 16:32:39.756990    3848 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	I0815 16:32:39.778125    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetIP
	I0815 16:32:39.778383    3848 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0815 16:32:39.781735    3848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
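The /etc/hosts edit above deliberately avoids sed -i: it rebuilds the file minus any stale host.minikube.internal line, appends the fresh mapping, and only then sudo-copies the temp file over the original, so a half-written hosts file is never left in place. The same idiom in isolation (the $'...' quoting makes the tab in the grep pattern explicit):

	entry=$'192.169.0.1\thost.minikube.internal'
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo "$entry"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$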
	I0815 16:32:39.792335    3848 mustload.go:65] Loading cluster: ha-138000
	I0815 16:32:39.792518    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:32:39.792754    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:32:39.792777    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:32:39.801573    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52333
	I0815 16:32:39.801892    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:32:39.802227    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:32:39.802235    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:32:39.802431    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:32:39.802539    3848 main.go:141] libmachine: (ha-138000) Calling .GetState
	I0815 16:32:39.802617    3848 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:32:39.802698    3848 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid from json: 3862
	I0815 16:32:39.803669    3848 host.go:66] Checking if "ha-138000" exists ...
	I0815 16:32:39.803925    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:32:39.803948    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:32:39.812411    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52335
	I0815 16:32:39.812752    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:32:39.813108    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:32:39.813119    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:32:39.813352    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:32:39.813479    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:32:39.813578    3848 certs.go:68] Setting up /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000 for IP: 192.169.0.8
	I0815 16:32:39.813584    3848 certs.go:194] generating shared ca certs ...
	I0815 16:32:39.813595    3848 certs.go:226] acquiring lock for ca certs: {Name:mka1c88019f6064d2983ac988db71a67aeb65696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:32:39.813775    3848 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key
	I0815 16:32:39.813853    3848 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key
	I0815 16:32:39.813863    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 16:32:39.813888    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 16:32:39.813907    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 16:32:39.813924    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 16:32:39.814032    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem (1338 bytes)
	W0815 16:32:39.814088    3848 certs.go:480] ignoring /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498_empty.pem, impossibly tiny 0 bytes
	I0815 16:32:39.814098    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 16:32:39.814142    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem (1082 bytes)
	I0815 16:32:39.814184    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem (1123 bytes)
	I0815 16:32:39.814213    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem (1679 bytes)
	I0815 16:32:39.814289    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:32:39.814324    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:32:39.814344    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem -> /usr/share/ca-certificates/1498.pem
	I0815 16:32:39.814362    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /usr/share/ca-certificates/14982.pem
	I0815 16:32:39.814393    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 16:32:39.834330    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 16:32:39.854069    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 16:32:39.873582    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 16:32:39.893143    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 16:32:39.912645    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem --> /usr/share/ca-certificates/1498.pem (1338 bytes)
	I0815 16:32:39.932104    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /usr/share/ca-certificates/14982.pem (1708 bytes)
	I0815 16:32:39.951872    3848 ssh_runner.go:195] Run: openssl version
	I0815 16:32:39.956296    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 16:32:39.966055    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:32:39.970287    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:05 /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:32:39.970366    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:32:39.974984    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 16:32:39.984513    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1498.pem && ln -fs /usr/share/ca-certificates/1498.pem /etc/ssl/certs/1498.pem"
	I0815 16:32:39.994098    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1498.pem
	I0815 16:32:39.997571    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:14 /usr/share/ca-certificates/1498.pem
	I0815 16:32:39.997641    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1498.pem
	I0815 16:32:40.002092    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1498.pem /etc/ssl/certs/51391683.0"
	I0815 16:32:40.011802    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14982.pem && ln -fs /usr/share/ca-certificates/14982.pem /etc/ssl/certs/14982.pem"
	I0815 16:32:40.021159    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14982.pem
	I0815 16:32:40.024904    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:14 /usr/share/ca-certificates/14982.pem
	I0815 16:32:40.024948    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14982.pem
	I0815 16:32:40.029236    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14982.pem /etc/ssl/certs/3ec20f2e.0"
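The symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) are not arbitrary: OpenSSL looks CAs up in /etc/ssl/certs by subject-hash filename, which is what the openssl x509 -hash calls in between compute. A sketch of how one such link is derived:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	# in this run, $hash is b5213941 for the minikube CA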
	I0815 16:32:40.038952    3848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 16:32:40.042186    3848 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 16:32:40.042220    3848 kubeadm.go:934] updating node {m04 192.169.0.8 0 v1.31.0 docker false true} ...
	I0815 16:32:40.042279    3848 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-138000-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 16:32:40.042327    3848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 16:32:40.050823    3848 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 16:32:40.050877    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0815 16:32:40.059254    3848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0815 16:32:40.072800    3848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 16:32:40.086506    3848 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0815 16:32:40.089484    3848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 16:32:40.099835    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:32:40.204428    3848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:32:40.219160    3848 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0815 16:32:40.219362    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:32:40.240563    3848 out.go:177] * Verifying Kubernetes components...
	I0815 16:32:40.281239    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:32:40.407726    3848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:32:40.424517    3848 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:32:40.424746    3848 kapi.go:59] client config for ha-138000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.key", CAFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x3a6cf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0815 16:32:40.424790    3848 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0815 16:32:40.424946    3848 node_ready.go:35] waiting up to 6m0s for node "ha-138000-m04" to be "Ready" ...
	I0815 16:32:40.424985    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m04
	I0815 16:32:40.424990    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.424997    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.425001    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.429695    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:32:40.925699    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m04
	I0815 16:32:40.925718    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.925730    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.925735    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.928643    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:40.929158    3848 node_ready.go:49] node "ha-138000-m04" has status "Ready":"True"
	I0815 16:32:40.929170    3848 node_ready.go:38] duration metric: took 503.811986ms for node "ha-138000-m04" to be "Ready" ...
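The node_ready loop above GETs the Node object roughly every 500ms until status.conditions reports Ready=True. An equivalent one-liner against the same node with kubectl (the node name is from this run; the loop itself is a sketch):

	until [ "$(kubectl get node ha-138000-m04 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}')" = "True" ]; do
	  sleep 0.5
	done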
	I0815 16:32:40.929177    3848 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 16:32:40.929232    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:32:40.929240    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.929248    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.929253    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.932889    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:40.938534    3848 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dmgt5" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:40.938586    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-dmgt5
	I0815 16:32:40.938591    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.938597    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.938601    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.940630    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:40.941135    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:40.941143    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.941149    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.941155    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.943092    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:32:40.943437    3848 pod_ready.go:93] pod "coredns-6f6b679f8f-dmgt5" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:40.943446    3848 pod_ready.go:82] duration metric: took 4.897461ms for pod "coredns-6f6b679f8f-dmgt5" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:40.943453    3848 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-zc8jj" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:40.943484    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-zc8jj
	I0815 16:32:40.943489    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.943495    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.943498    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.945206    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:32:40.945690    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:40.945697    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.945703    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.945706    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.947257    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:32:40.947557    3848 pod_ready.go:93] pod "coredns-6f6b679f8f-zc8jj" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:40.947566    3848 pod_ready.go:82] duration metric: took 4.10464ms for pod "coredns-6f6b679f8f-zc8jj" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:40.947580    3848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:40.947611    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000
	I0815 16:32:40.947616    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.947622    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.947625    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.949227    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:32:40.949563    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:40.949570    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.949576    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.949579    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.951175    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:32:40.951528    3848 pod_ready.go:93] pod "etcd-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:40.951537    3848 pod_ready.go:82] duration metric: took 3.9487ms for pod "etcd-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:40.951543    3848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:40.951576    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m02
	I0815 16:32:40.951581    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.951587    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.951590    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.953480    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:32:40.953888    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:32:40.953896    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.953902    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.953906    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.956234    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:40.956704    3848 pod_ready.go:93] pod "etcd-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:40.956713    3848 pod_ready.go:82] duration metric: took 5.161406ms for pod "etcd-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:40.956719    3848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:41.126239    3848 request.go:632] Waited for 169.295221ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:32:41.126310    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:32:41.126326    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:41.126342    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:41.126348    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:41.129984    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:41.327227    3848 request.go:632] Waited for 196.482674ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:41.327282    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:41.327327    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:41.327340    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:41.327346    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:41.330300    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:41.330659    3848 pod_ready.go:93] pod "etcd-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:41.330669    3848 pod_ready.go:82] duration metric: took 373.660924ms for pod "etcd-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
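The "Waited for ... due to client-side throttling" lines that start here come from client-go's per-process rate limiter, not from server-side API Priority and Fairness. The rest.Config dump above shows QPS:0, Burst:0, meaning the defaults apply (QPS 5, burst 10), so after a quick burst of Node and Pod GETs each further request waits for a token. This cannot be reproduced with separate kubectl invocations, since each process gets a fresh limiter; a discovery-heavy command that issues many requests from one process may surface similar messages at high verbosity, depending on kubectl version and discovery cache state:

	# Illustrative only: may not throttle on newer kubectl
	# (raised client defaults, cached discovery).
	kubectl api-resources -v=6 2>&1 | grep -i throttl || true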
	I0815 16:32:41.330681    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:41.526448    3848 request.go:632] Waited for 195.583591ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000
	I0815 16:32:41.526543    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000
	I0815 16:32:41.526554    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:41.526567    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:41.526577    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:41.532016    3848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 16:32:41.726373    3848 request.go:632] Waited for 193.637616ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:41.726406    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:41.726411    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:41.726417    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:41.726421    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:41.728634    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:41.729100    3848 pod_ready.go:93] pod "kube-apiserver-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:41.729111    3848 pod_ready.go:82] duration metric: took 398.123683ms for pod "kube-apiserver-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:41.729118    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:41.926911    3848 request.go:632] Waited for 197.603818ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:32:41.927000    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:32:41.927007    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:41.927013    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:41.927017    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:41.929844    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:42.128208    3848 request.go:632] Waited for 197.600405ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:32:42.128281    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:32:42.128287    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:42.128294    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:42.128297    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:42.130511    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:42.130893    3848 pod_ready.go:93] pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:42.130903    3848 pod_ready.go:82] duration metric: took 401.488989ms for pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:42.130910    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:42.326992    3848 request.go:632] Waited for 195.89771ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m03
	I0815 16:32:42.327104    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m03
	I0815 16:32:42.327117    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:42.327128    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:42.327133    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:42.330012    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:42.528721    3848 request.go:632] Waited for 197.972621ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:42.528810    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:42.528823    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:42.528832    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:42.528839    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:42.531660    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:42.532014    3848 pod_ready.go:93] pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:42.532023    3848 pod_ready.go:82] duration metric: took 400.824225ms for pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:42.532031    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:42.728571    3848 request.go:632] Waited for 196.361424ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:32:42.728605    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:32:42.728614    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:42.728647    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:42.728651    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:42.731003    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:42.928382    3848 request.go:632] Waited for 196.815945ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:42.928456    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:42.928464    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:42.928472    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:42.928479    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:42.930971    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:42.931316    3848 pod_ready.go:93] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:42.931325    3848 pod_ready.go:82] duration metric: took 399.007322ms for pod "kube-controller-manager-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:42.931332    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:43.127763    3848 request.go:632] Waited for 196.250954ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000-m02
	I0815 16:32:43.127817    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000-m02
	I0815 16:32:43.127830    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:43.127894    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:43.127907    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:43.131065    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:43.327999    3848 request.go:632] Waited for 196.235394ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:32:43.328052    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:32:43.328063    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:43.328073    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:43.328081    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:43.331302    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:43.331997    3848 pod_ready.go:93] pod "kube-controller-manager-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:43.332007    3848 pod_ready.go:82] duration metric: took 400.403262ms for pod "kube-controller-manager-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:43.332014    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:43.527716    3848 request.go:632] Waited for 195.527377ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000-m03
	I0815 16:32:43.527817    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000-m03
	I0815 16:32:43.527829    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:43.527841    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:43.527847    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:43.530965    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:43.728236    3848 request.go:632] Waited for 196.484633ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:43.728298    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:43.728309    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:43.728320    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:43.728328    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:43.731883    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:43.732469    3848 pod_ready.go:93] pod "kube-controller-manager-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:43.732478    3848 pod_ready.go:82] duration metric: took 400.192656ms for pod "kube-controller-manager-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:43.732484    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cznkn" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:43.928265    3848 request.go:632] Waited for 195.61986ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cznkn
	I0815 16:32:43.928325    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cznkn
	I0815 16:32:43.928331    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:43.928337    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:43.928341    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:43.930546    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:44.128606    3848 request.go:632] Waited for 197.39717ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:44.128669    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:44.128682    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:44.128693    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:44.128702    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:44.132274    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:44.132835    3848 pod_ready.go:93] pod "kube-proxy-cznkn" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:44.132847    3848 pod_ready.go:82] duration metric: took 400.10235ms for pod "kube-proxy-cznkn" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:44.132856    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kxghx" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:44.328927    3848 request.go:632] Waited for 195.898781ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kxghx
	I0815 16:32:44.328980    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kxghx
	I0815 16:32:44.328988    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:44.328997    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:44.329003    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:44.332425    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:44.528721    3848 request.go:632] Waited for 195.542417ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:44.528856    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:44.528867    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:44.528878    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:44.528884    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:44.532391    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:44.532921    3848 pod_ready.go:93] pod "kube-proxy-kxghx" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:44.532933    3848 pod_ready.go:82] duration metric: took 399.821933ms for pod "kube-proxy-kxghx" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:44.532943    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qpth7" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:44.729675    3848 request.go:632] Waited for 196.549445ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qpth7
	I0815 16:32:44.729804    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qpth7
	I0815 16:32:44.729823    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:44.729835    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:44.729845    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:44.733406    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:44.929790    3848 request.go:632] Waited for 195.811353ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m04
	I0815 16:32:44.929844    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m04
	I0815 16:32:44.929899    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:44.929913    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:44.929919    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:44.933124    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:44.933608    3848 pod_ready.go:93] pod "kube-proxy-qpth7" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:44.933620    3848 pod_ready.go:82] duration metric: took 400.423483ms for pod "kube-proxy-qpth7" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:44.933628    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tf79g" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:45.129188    3848 request.go:632] Waited for 195.397689ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tf79g
	I0815 16:32:45.129249    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tf79g
	I0815 16:32:45.129265    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:45.129278    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:45.129288    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:45.132523    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:45.329740    3848 request.go:632] Waited for 196.543831ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:32:45.329842    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:32:45.329853    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:45.329864    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:45.329893    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:45.332959    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:45.333655    3848 pod_ready.go:93] pod "kube-proxy-tf79g" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:45.333668    3848 pod_ready.go:82] duration metric: took 399.799233ms for pod "kube-proxy-tf79g" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:45.333677    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:45.528959    3848 request.go:632] Waited for 195.085989ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000
	I0815 16:32:45.528999    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000
	I0815 16:32:45.529004    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:45.529011    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:45.529014    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:45.531204    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:45.730380    3848 request.go:632] Waited for 198.71096ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:45.730470    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:45.730488    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:45.730540    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:45.730549    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:45.733632    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:45.734206    3848 pod_ready.go:93] pod "kube-scheduler-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:45.734218    3848 pod_ready.go:82] duration metric: took 400.300105ms for pod "kube-scheduler-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:45.734227    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:45.929618    3848 request.go:632] Waited for 195.186999ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m02
	I0815 16:32:45.929667    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m02
	I0815 16:32:45.929676    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:45.929687    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:45.929695    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:45.933262    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:46.130161    3848 request.go:632] Waited for 196.149607ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:32:46.130227    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:32:46.130233    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:46.130239    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:46.130243    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:46.132556    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:46.132872    3848 pod_ready.go:93] pod "kube-scheduler-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:46.132882    3848 pod_ready.go:82] duration metric: took 398.424946ms for pod "kube-scheduler-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:46.132892    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:46.330062    3848 request.go:632] Waited for 196.982598ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m03
	I0815 16:32:46.330155    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m03
	I0815 16:32:46.330165    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:46.330189    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:46.330198    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:46.333748    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:46.529626    3848 request.go:632] Waited for 195.297916ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:46.529687    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:46.529698    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:46.529709    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:46.529716    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:46.532896    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:46.533425    3848 pod_ready.go:93] pod "kube-scheduler-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:46.533437    3848 pod_ready.go:82] duration metric: took 400.316472ms for pod "kube-scheduler-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:46.533445    3848 pod_ready.go:39] duration metric: took 5.600601602s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 16:32:46.533458    3848 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 16:32:46.533512    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 16:32:46.545338    3848 system_svc.go:56] duration metric: took 11.868784ms WaitForService to wait for kubelet
	I0815 16:32:46.545353    3848 kubeadm.go:582] duration metric: took 6.321930293s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 16:32:46.545367    3848 node_conditions.go:102] verifying NodePressure condition ...
	I0815 16:32:46.729678    3848 request.go:632] Waited for 184.161888ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0815 16:32:46.729775    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0815 16:32:46.729791    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:46.729803    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:46.729814    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:46.733356    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:46.734408    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:32:46.734417    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:32:46.734438    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:32:46.734446    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:32:46.734451    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:32:46.734454    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:32:46.734459    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:32:46.734463    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:32:46.734466    3848 node_conditions.go:105] duration metric: took 188.991963ms to run NodePressure ...
	I0815 16:32:46.734473    3848 start.go:241] waiting for startup goroutines ...
	I0815 16:32:46.734487    3848 start.go:255] writing updated cluster config ...
	I0815 16:32:46.734849    3848 ssh_runner.go:195] Run: rm -f paused
	I0815 16:32:46.777324    3848 start.go:600] kubectl: 1.29.2, cluster: 1.31.0 (minor skew: 2)
	I0815 16:32:46.799308    3848 out.go:201] 
	W0815 16:32:46.820067    3848 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0.
	I0815 16:32:46.840863    3848 out.go:177]   - Want kubectl v1.31.0? Try 'minikube kubectl -- get pods -A'
	I0815 16:32:46.862128    3848 out.go:177] * Done! kubectl is now configured to use "ha-138000" cluster and "default" namespace by default
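
The "minor skew: 2" warning above follows the kubectl version-skew policy: kubectl is only supported within one minor version of the API server, so a v1.29.2 client against a v1.31.0 cluster falls outside that window. The repeated "Waited ... due to client-side throttling, not priority and fairness" lines come from client-go's local token-bucket rate limiter, not server-side API Priority and Fairness: at the default QPS=5 a token is issued every 200ms, which is exactly the wait the back-to-back pod and node GETs report. A minimal sketch of raising those limits on a rest.Config, assuming a standalone client program (the kubeconfig path and the QPS/Burst values are illustrative assumptions, not what minikube itself uses):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load a kubeconfig the way kubectl does (this path is an assumption).
		config, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		// client-go defaults to QPS=5/Burst=10; a tight readiness-polling loop
		// exceeds that budget and produces the ~200ms client-side throttling waits.
		config.QPS = 50
		config.Burst = 100

		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// The same kind of call the poll loop makes: list pods in kube-system.
		pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("kube-system pods: %d\n", len(pods.Items))
	}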
	
	
	==> Docker <==
	Aug 15 23:30:58 ha-138000 dockerd[1153]: time="2024-08-15T23:30:58.911495531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:30:58 ha-138000 dockerd[1153]: time="2024-08-15T23:30:58.913627850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 15 23:30:58 ha-138000 dockerd[1153]: time="2024-08-15T23:30:58.913666039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 15 23:30:58 ha-138000 dockerd[1153]: time="2024-08-15T23:30:58.913677629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:30:58 ha-138000 dockerd[1153]: time="2024-08-15T23:30:58.913771765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:30:58 ha-138000 dockerd[1153]: time="2024-08-15T23:30:58.917066694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 15 23:30:58 ha-138000 dockerd[1153]: time="2024-08-15T23:30:58.917195390Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 15 23:30:58 ha-138000 dockerd[1153]: time="2024-08-15T23:30:58.917208298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:30:58 ha-138000 dockerd[1153]: time="2024-08-15T23:30:58.917385910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:30:59 ha-138000 dockerd[1153]: time="2024-08-15T23:30:59.886428053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 15 23:30:59 ha-138000 dockerd[1153]: time="2024-08-15T23:30:59.886532806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 15 23:30:59 ha-138000 dockerd[1153]: time="2024-08-15T23:30:59.886546833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:30:59 ha-138000 dockerd[1153]: time="2024-08-15T23:30:59.886748891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:31:00 ha-138000 dockerd[1153]: time="2024-08-15T23:31:00.892633352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 15 23:31:00 ha-138000 dockerd[1153]: time="2024-08-15T23:31:00.893116347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 15 23:31:00 ha-138000 dockerd[1153]: time="2024-08-15T23:31:00.893221469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:31:00 ha-138000 dockerd[1153]: time="2024-08-15T23:31:00.893411350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:31:01 ha-138000 dockerd[1153]: time="2024-08-15T23:31:01.876748430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 15 23:31:01 ha-138000 dockerd[1153]: time="2024-08-15T23:31:01.876814366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 15 23:31:01 ha-138000 dockerd[1153]: time="2024-08-15T23:31:01.876834716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:31:01 ha-138000 dockerd[1153]: time="2024-08-15T23:31:01.876961405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:31:03 ha-138000 dockerd[1153]: time="2024-08-15T23:31:03.874516614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 15 23:31:03 ha-138000 dockerd[1153]: time="2024-08-15T23:31:03.874614005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 15 23:31:03 ha-138000 dockerd[1153]: time="2024-08-15T23:31:03.874643416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:31:03 ha-138000 dockerd[1153]: time="2024-08-15T23:31:03.874757663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f4a0ec142726f       045733566833c                                                                                         3 minutes ago       Running             kube-controller-manager   7                   787273cdcffa4       kube-controller-manager-ha-138000
	9b4d9e684266a       cbb01a7bd410d                                                                                         3 minutes ago       Running             coredns                   1                   e616bc4c74358       coredns-6f6b679f8f-dmgt5
	80f5762ff7596       8c811b4aec35f                                                                                         3 minutes ago       Running             busybox                   1                   67d12a31b7b49       busybox-7dff88458-wgww9
	fea7f52d9a276       6e38f40d628db                                                                                         3 minutes ago       Running             storage-provisioner       1                   b65d03e28df57       storage-provisioner
	a06770ea62d50       cbb01a7bd410d                                                                                         3 minutes ago       Running             coredns                   1                   730316cfbee9c       coredns-6f6b679f8f-zc8jj
	3102e608c7d69       ad83b2ca7b09e                                                                                         3 minutes ago       Running             kube-proxy                1                   824e79b38bfeb       kube-proxy-cznkn
	d35ee43272703       12968670680f4                                                                                         3 minutes ago       Running             kindnet-cni               1                   28b2ff94764c2       kindnet-77dc6
	67b207257b40d       2e96e5913fc06                                                                                         3 minutes ago       Running             etcd                      3                   5fbdeb5e7a6b9       etcd-ha-138000
	c2ddb52a9846f       1766f54c897f0                                                                                         3 minutes ago       Running             kube-scheduler            2                   d5e3465359549       kube-scheduler-ha-138000
	2d2c6da6f7b74       38af8ddebf499                                                                                         3 minutes ago       Running             kube-vip                  1                   2bb58ad8c8f10       kube-vip-ha-138000
	2ed9ae0427266       045733566833c                                                                                         3 minutes ago       Exited              kube-controller-manager   6                   787273cdcffa4       kube-controller-manager-ha-138000
	a6baf6e21d6c9       604f5db92eaa8                                                                                         3 minutes ago       Running             kube-apiserver            6                   0de6d71d60938       kube-apiserver-ha-138000
	5ed11c46e0eb7       604f5db92eaa8                                                                                         4 minutes ago       Exited              kube-apiserver            5                   7152268f8eec4       kube-apiserver-ha-138000
	59dac0b44544a       2e96e5913fc06                                                                                         5 minutes ago       Exited              etcd                      2                   ec285d4826baa       etcd-ha-138000
	efbc09be8eda5       38af8ddebf499                                                                                         9 minutes ago       Exited              kube-vip                  0                   0c665afd15e6f       kube-vip-ha-138000
	ac6935271595c       1766f54c897f0                                                                                         9 minutes ago       Exited              kube-scheduler            1                   07c1c62e41d3a       kube-scheduler-ha-138000
	8f20284cd3969       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Exited              busybox                   0                   bfc975a528b9e       busybox-7dff88458-wgww9
	42f5d82b00417       cbb01a7bd410d                                                                                         14 minutes ago      Exited              coredns                   0                   10891f8fbffcc       coredns-6f6b679f8f-dmgt5
	3e8b806ef4f33       cbb01a7bd410d                                                                                         14 minutes ago      Exited              coredns                   0                   096ab15603b01       coredns-6f6b679f8f-zc8jj
	6a1122913bb18       6e38f40d628db                                                                                         14 minutes ago      Exited              storage-provisioner       0                   e30dde4a5a10d       storage-provisioner
	c2a16126718b3       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              14 minutes ago      Exited              kindnet-cni               0                   e260a94a203af       kindnet-77dc6
	fc2e141007efb       ad83b2ca7b09e                                                                                         14 minutes ago      Exited              kube-proxy                0                   5b40cdd6b2c24       kube-proxy-cznkn
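
In the table above, ATTEMPT is the per-container attempt counter reported by the CRI: kube-controller-manager is on attempt 7 and kube-apiserver on attempt 6, with the superseded attempts left behind in the Exited state, so the control plane crash-looped several times before this run settled. The same view could be reproduced on the node itself with something like the following (an illustrative command, not taken from this run):

	out/minikube-darwin-amd64 ssh -p ha-138000 -- sudo docker ps -a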
	
	
	==> coredns [3e8b806ef4f3] <==
	[INFO] 10.244.2.2:44773 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000075522s
	[INFO] 10.244.2.2:53805 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098349s
	[INFO] 10.244.2.2:34369 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000122495s
	[INFO] 10.244.0.4:59671 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000077646s
	[INFO] 10.244.0.4:41185 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000079139s
	[INFO] 10.244.0.4:42405 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000092065s
	[INFO] 10.244.0.4:54373 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000049998s
	[INFO] 10.244.0.4:57169 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000050383s
	[INFO] 10.244.0.4:37825 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085108s
	[INFO] 10.244.1.2:59685 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000072268s
	[INFO] 10.244.1.2:32923 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073054s
	[INFO] 10.244.2.2:50876 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000068102s
	[INFO] 10.244.2.2:54719 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000762s
	[INFO] 10.244.0.4:57395 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000091608s
	[INFO] 10.244.0.4:37936 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000031052s
	[INFO] 10.244.1.2:58408 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000088888s
	[INFO] 10.244.1.2:42731 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000114857s
	[INFO] 10.244.1.2:41638 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000082664s
	[INFO] 10.244.2.2:52666 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092331s
	[INFO] 10.244.2.2:41501 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000093116s
	[INFO] 10.244.0.4:48200 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000075447s
	[INFO] 10.244.0.4:35056 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000091854s
	[INFO] 10.244.0.4:36257 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000057922s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
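
Reading these entries against the CoreDNS log plugin's field order (client, query id, "type class name proto size do bufsize", rcode, header flags, response size, duration), the first line of this section says: client 10.244.2.2:44773 sent query id 6, an A lookup for kubernetes.default. over UDP in a 36-byte request with the DNSSEC DO bit false and a 512-byte EDNS0 buffer, and received NXDOMAIN with flags qr,aa,rd,ra in a 111-byte response after roughly 75µs. The NXDOMAIN answers for kubernetes.default. and kubernetes.default.default.svc.cluster.local are the expected misses as the stub resolver walks the pod's search domains before kubernetes.default.svc.cluster.local resolves with NOERROR.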
	
	
	==> coredns [42f5d82b0041] <==
	[INFO] 10.244.1.2:50104 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.009876264s
	[INFO] 10.244.0.4:33653 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115506s
	[INFO] 10.244.0.4:45180 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000042438s
	[INFO] 10.244.1.2:60312 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000068925s
	[INFO] 10.244.1.2:38521 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000124425s
	[INFO] 10.244.1.2:51675 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125646s
	[INFO] 10.244.1.2:33974 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000078827s
	[INFO] 10.244.2.2:38966 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000078816s
	[INFO] 10.244.2.2:56056 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000620092s
	[INFO] 10.244.2.2:32787 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000109221s
	[INFO] 10.244.2.2:55701 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000039601s
	[INFO] 10.244.0.4:52543 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000083971s
	[INFO] 10.244.0.4:55050 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000146353s
	[INFO] 10.244.1.2:52165 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100415s
	[INFO] 10.244.1.2:41123 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060755s
	[INFO] 10.244.2.2:56460 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087503s
	[INFO] 10.244.2.2:36407 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009778s
	[INFO] 10.244.0.4:40764 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000037536s
	[INFO] 10.244.0.4:58473 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000029335s
	[INFO] 10.244.1.2:38640 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000118481s
	[INFO] 10.244.2.2:46151 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000117088s
	[INFO] 10.244.2.2:34054 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000108858s
	[INFO] 10.244.0.4:56735 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000069666s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9b4d9e684266] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35767 - 22561 "HINFO IN 7004530829965964013.1750022571380345519. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015451267s
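
The lone HINFO query for a random name (7004530829965964013.1750022571380345519.) is CoreDNS's loop plugin probing at startup to detect forwarding loops; an NXDOMAIN answer from the upstream resolver is the healthy result.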
	
	
	==> coredns [a06770ea62d5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45363 - 12851 "HINFO IN 3106403090745602942.3481725171230015744. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.010450605s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[254954895]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 23:30:59.263) (total time: 30001ms):
	Trace[254954895]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (23:31:29.264)
	Trace[254954895]: [30.001669104s] [30.001669104s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1581349608]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 23:30:59.262) (total time: 30003ms):
	Trace[1581349608]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (23:31:29.264)
	Trace[1581349608]: [30.003336626s] [30.003336626s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[405473182]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 23:30:59.265) (total time: 30001ms):
	Trace[405473182]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (23:31:29.266)
	Trace[405473182]: [30.001211712s] [30.001211712s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
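
10.96.0.1:443 is the in-cluster kubernetes service VIP, so these 30-second i/o timeouts mean this coredns pod could not reach any API server endpoint through the service for the first half-minute after its restart, consistent with kube-proxy and the apiserver still coming up; the kubernetes plugin keeps retrying, and the successful API polls in the minikube log above show the cluster recovering. One way to check that kube-proxy has programmed the VIP on the node would be something like the following (an illustrative command, not taken from this run):

	out/minikube-darwin-amd64 ssh -p ha-138000 -- sudo iptables-save | grep 10.96.0.1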
	
	
	==> describe nodes <==
	Name:               ha-138000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-138000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=ha-138000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T16_19_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:19:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-138000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 23:34:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:30:42 +0000   Thu, 15 Aug 2024 23:19:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:30:42 +0000   Thu, 15 Aug 2024 23:19:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:30:42 +0000   Thu, 15 Aug 2024 23:19:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:30:42 +0000   Thu, 15 Aug 2024 23:30:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-138000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 92a77083c2c148ceb3a6c27974611a44
	  System UUID:                bf1b4c04-0000-0000-a028-0dd0a6dcd337
	  Boot ID:                    0c496489-3552-4f3e-814f-62743ebab1dd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wgww9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-6f6b679f8f-dmgt5             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-6f6b679f8f-zc8jj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-138000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-77dc6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-138000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-138000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-cznkn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-138000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-138000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m16s                kube-proxy       
	  Normal  Starting                 14m                  kube-proxy       
	  Normal  NodeHasSufficientPID     14m                  kubelet          Node ha-138000 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                  kubelet          Node ha-138000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                  kubelet          Node ha-138000 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           14m                  node-controller  Node ha-138000 event: Registered Node ha-138000 in Controller
	  Normal  NodeReady                14m                  kubelet          Node ha-138000 status is now: NodeReady
	  Normal  RegisteredNode           13m                  node-controller  Node ha-138000 event: Registered Node ha-138000 in Controller
	  Normal  RegisteredNode           12m                  node-controller  Node ha-138000 event: Registered Node ha-138000 in Controller
	  Normal  RegisteredNode           10m                  node-controller  Node ha-138000 event: Registered Node ha-138000 in Controller
	  Normal  Starting                 4m3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m3s (x8 over 4m3s)  kubelet          Node ha-138000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m3s (x8 over 4m3s)  kubelet          Node ha-138000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m3s (x7 over 4m3s)  kubelet          Node ha-138000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m31s                node-controller  Node ha-138000 event: Registered Node ha-138000 in Controller
	  Normal  RegisteredNode           3m9s                 node-controller  Node ha-138000 event: Registered Node ha-138000 in Controller
	  Normal  RegisteredNode           2m31s                node-controller  Node ha-138000 event: Registered Node ha-138000 in Controller
	  Normal  RegisteredNode           23s                  node-controller  Node ha-138000 event: Registered Node ha-138000 in Controller
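
The Allocated resources percentages above are simply requests (or limits) divided by the node's Allocatable, rounded down: cpu 950m of 2000m allocatable is 47%, the 100m cpu limit is 5%, and memory 290Mi of 2164336Ki (about 2113Mi) is 13%, with the 390Mi limit at 18%. The same arithmetic applies to the per-node tables that follow.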
	
	
	Name:               ha-138000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-138000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=ha-138000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T16_20_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:20:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-138000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 23:34:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:30:40 +0000   Thu, 15 Aug 2024 23:20:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:30:40 +0000   Thu, 15 Aug 2024 23:20:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:30:40 +0000   Thu, 15 Aug 2024 23:20:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:30:40 +0000   Thu, 15 Aug 2024 23:20:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-138000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 f9fb9b8d5e3646d78c1f55449a26b188
	  System UUID:                4cff4215-0000-0000-9139-05f05b79bce3
	  Boot ID:                    26a8e1bf-75d0-4caa-b86c-d0e6f8c9e474
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-s6zqd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-138000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-z6mnx                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-138000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-138000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-tf79g                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-138000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-138000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 3m33s                  kube-proxy       
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-138000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-138000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-138000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-138000-m02 event: Registered Node ha-138000-m02 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-138000-m02 event: Registered Node ha-138000-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-138000-m02 event: Registered Node ha-138000-m02 in Controller
	  Normal   NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Warning  Rebooted                 10m                    kubelet          Node ha-138000-m02 has been rebooted, boot id: 8d4ef345-e3b6-437d-95f7-338233576a37
	  Normal   NodeHasSufficientMemory  10m                    kubelet          Node ha-138000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m                    kubelet          Node ha-138000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m                    kubelet          Node ha-138000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                    node-controller  Node ha-138000-m02 event: Registered Node ha-138000-m02 in Controller
	  Normal   Starting                 3m44s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  3m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    3m43s (x8 over 3m44s)  kubelet          Node ha-138000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  3m43s (x8 over 3m44s)  kubelet          Node ha-138000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     3m43s (x7 over 3m44s)  kubelet          Node ha-138000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m31s                  node-controller  Node ha-138000-m02 event: Registered Node ha-138000-m02 in Controller
	  Normal   RegisteredNode           3m9s                   node-controller  Node ha-138000-m02 event: Registered Node ha-138000-m02 in Controller
	  Normal   RegisteredNode           2m31s                  node-controller  Node ha-138000-m02 event: Registered Node ha-138000-m02 in Controller
	  Normal   RegisteredNode           23s                    node-controller  Node ha-138000-m02 event: Registered Node ha-138000-m02 in Controller
	
	
	Name:               ha-138000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-138000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=ha-138000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T16_21_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:21:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-138000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 23:34:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:31:37 +0000   Thu, 15 Aug 2024 23:31:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:31:37 +0000   Thu, 15 Aug 2024 23:31:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:31:37 +0000   Thu, 15 Aug 2024 23:31:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:31:37 +0000   Thu, 15 Aug 2024 23:31:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-138000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 a589cb93968b432caa5fc365bb995740
	  System UUID:                42284b8b-0000-0000-ac7c-129bf380703a
	  Boot ID:                    3cf0bc98-5f0e-4a33-80fb-e0c2d84cf3db
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-t5sdh                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-138000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-dsvxt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-138000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-138000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-kxghx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-138000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-138000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m34s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-138000-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-138000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-138000-m03 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           12m                    node-controller  Node ha-138000-m03 event: Registered Node ha-138000-m03 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-138000-m03 event: Registered Node ha-138000-m03 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-138000-m03 event: Registered Node ha-138000-m03 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-138000-m03 event: Registered Node ha-138000-m03 in Controller
	  Normal   RegisteredNode           3m31s                  node-controller  Node ha-138000-m03 event: Registered Node ha-138000-m03 in Controller
	  Normal   RegisteredNode           3m9s                   node-controller  Node ha-138000-m03 event: Registered Node ha-138000-m03 in Controller
	  Normal   NodeNotReady             2m51s                  node-controller  Node ha-138000-m03 status is now: NodeNotReady
	  Normal   Starting                 2m38s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m38s (x3 over 2m38s)  kubelet          Node ha-138000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m38s (x3 over 2m38s)  kubelet          Node ha-138000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m38s (x3 over 2m38s)  kubelet          Node ha-138000-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m38s (x2 over 2m38s)  kubelet          Node ha-138000-m03 has been rebooted, boot id: 3cf0bc98-5f0e-4a33-80fb-e0c2d84cf3db
	  Normal   NodeReady                2m38s (x2 over 2m38s)  kubelet          Node ha-138000-m03 status is now: NodeReady
	  Normal   RegisteredNode           2m31s                  node-controller  Node ha-138000-m03 event: Registered Node ha-138000-m03 in Controller
	  Normal   RegisteredNode           23s                    node-controller  Node ha-138000-m03 event: Registered Node ha-138000-m03 in Controller
	
	
	Name:               ha-138000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-138000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=ha-138000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T16_22_36_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:22:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-138000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 23:34:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:32:40 +0000   Thu, 15 Aug 2024 23:32:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:32:40 +0000   Thu, 15 Aug 2024 23:32:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:32:40 +0000   Thu, 15 Aug 2024 23:32:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:32:40 +0000   Thu, 15 Aug 2024 23:32:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-138000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 f4edcad8d76a442b9919d65bbd5ebb03
	  System UUID:                e49846a0-0000-0000-a846-8a8b2da04ea9
	  Boot ID:                    7d49d130-2f84-43a9-9c3e-7a69f44367c4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-m887r       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-proxy-qpth7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 93s                kube-proxy       
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)  kubelet          Node ha-138000-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet          Node ha-138000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)  kubelet          Node ha-138000-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           11m                node-controller  Node ha-138000-m04 event: Registered Node ha-138000-m04 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-138000-m04 event: Registered Node ha-138000-m04 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-138000-m04 event: Registered Node ha-138000-m04 in Controller
	  Normal   NodeReady                11m                kubelet          Node ha-138000-m04 status is now: NodeReady
	  Normal   RegisteredNode           10m                node-controller  Node ha-138000-m04 event: Registered Node ha-138000-m04 in Controller
	  Normal   RegisteredNode           3m31s              node-controller  Node ha-138000-m04 event: Registered Node ha-138000-m04 in Controller
	  Normal   RegisteredNode           3m9s               node-controller  Node ha-138000-m04 event: Registered Node ha-138000-m04 in Controller
	  Normal   NodeNotReady             2m51s              node-controller  Node ha-138000-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           2m31s              node-controller  Node ha-138000-m04 event: Registered Node ha-138000-m04 in Controller
	  Normal   Starting                 95s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  95s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 95s (x3 over 95s)  kubelet          Node ha-138000-m04 has been rebooted, boot id: 7d49d130-2f84-43a9-9c3e-7a69f44367c4
	  Normal   NodeHasSufficientMemory  95s (x4 over 95s)  kubelet          Node ha-138000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    95s (x4 over 95s)  kubelet          Node ha-138000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     95s (x4 over 95s)  kubelet          Node ha-138000-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             95s                kubelet          Node ha-138000-m04 status is now: NodeNotReady
	  Normal   NodeReady                95s (x2 over 95s)  kubelet          Node ha-138000-m04 status is now: NodeReady
	  Normal   RegisteredNode           23s                node-controller  Node ha-138000-m04 event: Registered Node ha-138000-m04 in Controller
	
	
	Name:               ha-138000-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-138000-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=ha-138000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T16_33_47_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:33:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-138000-m05
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 23:34:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:34:15 +0000   Thu, 15 Aug 2024 23:33:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:34:15 +0000   Thu, 15 Aug 2024 23:33:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:34:15 +0000   Thu, 15 Aug 2024 23:33:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:34:15 +0000   Thu, 15 Aug 2024 23:34:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.9
	  Hostname:    ha-138000-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 f3dfa27002394276aaf2f2145134003c
	  System UUID:                6c6a4a36-0000-0000-8ab7-05c1068f3e22
	  Boot ID:                    9c557ac2-9d33-48bd-8957-21ce53b8339d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-138000-m05                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29s
	  kube-system                 kindnet-qdhwz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      31s
	  kube-system                 kube-apiserver-ha-138000-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-ha-138000-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-dwbgv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-ha-138000-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-vip-ha-138000-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  RegisteredNode           31s                node-controller  Node ha-138000-m05 event: Registered Node ha-138000-m05 in Controller
	  Normal  NodeHasSufficientMemory  31s (x8 over 31s)  kubelet          Node ha-138000-m05 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s (x8 over 31s)  kubelet          Node ha-138000-m05 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s (x7 over 31s)  kubelet          Node ha-138000-m05 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  31s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           29s                node-controller  Node ha-138000-m05 event: Registered Node ha-138000-m05 in Controller
	  Normal  RegisteredNode           26s                node-controller  Node ha-138000-m05 event: Registered Node ha-138000-m05 in Controller
	  Normal  RegisteredNode           23s                node-controller  Node ha-138000-m05 event: Registered Node ha-138000-m05 in Controller
	
	
	==> dmesg <==
	[  +0.035773] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007968] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.680855] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000003] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006866] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Aug15 23:30] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.162045] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.989029] systemd-fstab-generator[468]: Ignoring "noauto" option for root device
	[  +0.101466] systemd-fstab-generator[487]: Ignoring "noauto" option for root device
	[  +1.930620] systemd-fstab-generator[1017]: Ignoring "noauto" option for root device
	[  +0.060770] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.229646] systemd-fstab-generator[1112]: Ignoring "noauto" option for root device
	[  +0.119765] systemd-fstab-generator[1124]: Ignoring "noauto" option for root device
	[  +0.123401] systemd-fstab-generator[1138]: Ignoring "noauto" option for root device
	[  +2.409334] systemd-fstab-generator[1357]: Ignoring "noauto" option for root device
	[  +0.114639] systemd-fstab-generator[1369]: Ignoring "noauto" option for root device
	[  +0.103538] systemd-fstab-generator[1381]: Ignoring "noauto" option for root device
	[  +0.135144] systemd-fstab-generator[1396]: Ignoring "noauto" option for root device
	[  +0.456371] systemd-fstab-generator[1560]: Ignoring "noauto" option for root device
	[  +6.803779] kauditd_printk_skb: 234 callbacks suppressed
	[ +21.488008] kauditd_printk_skb: 40 callbacks suppressed
	[ +18.019929] kauditd_printk_skb: 21 callbacks suppressed
	[Aug15 23:31] kauditd_printk_skb: 55 callbacks suppressed
	
	
	==> etcd [59dac0b44544] <==
	{"level":"info","ts":"2024-08-15T23:29:46.384063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to 2399e7dba5b18dfe at term 2"}
	{"level":"info","ts":"2024-08-15T23:29:46.384495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to c8daa22dc1df7d56 at term 2"}
	{"level":"warn","ts":"2024-08-15T23:29:46.408477Z","caller":"etcdserver/server.go:2139","msg":"failed to publish local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-138000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
	{"level":"warn","ts":"2024-08-15T23:29:46.415071Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"2399e7dba5b18dfe","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:29:46.415120Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"2399e7dba5b18dfe","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:29:46.419833Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c8daa22dc1df7d56","rtt":"0s","error":"dial tcp 192.169.0.7:2380: i/o timeout"}
	{"level":"warn","ts":"2024-08-15T23:29:46.419980Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c8daa22dc1df7d56","rtt":"0s","error":"dial tcp 192.169.0.7:2380: i/o timeout"}
	{"level":"warn","ts":"2024-08-15T23:29:46.732045Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419357201672,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-15T23:29:47.233019Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419357201672,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-08-15T23:29:47.382392Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-15T23:29:47.382847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-15T23:29:47.383118Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-08-15T23:29:47.383277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to 2399e7dba5b18dfe at term 2"}
	{"level":"info","ts":"2024-08-15T23:29:47.383307Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to c8daa22dc1df7d56 at term 2"}
	{"level":"warn","ts":"2024-08-15T23:29:47.734052Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419357201672,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-15T23:29:48.244565Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419357201672,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-08-15T23:29:48.381923Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-15T23:29:48.382007Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-15T23:29:48.382026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-08-15T23:29:48.382045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to 2399e7dba5b18dfe at term 2"}
	{"level":"info","ts":"2024-08-15T23:29:48.382066Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to c8daa22dc1df7d56 at term 2"}
	{"level":"warn","ts":"2024-08-15T23:29:48.745537Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419357201672,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-15T23:29:49.013739Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"4.788785781s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-15T23:29:49.013790Z","caller":"traceutil/trace.go:171","msg":"trace[283476530] range","detail":"{range_begin:; range_end:; }","duration":"4.78884981s","start":"2024-08-15T23:29:44.224933Z","end":"2024-08-15T23:29:49.013782Z","steps":["trace[283476530] 'agreement among raft nodes before linearized reading'  (duration: 4.788783568s)"],"step_count":1}
	{"level":"error","ts":"2024-08-15T23:29:49.013846Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: context canceled\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	
	
	==> etcd [67b207257b40] <==
	{"level":"info","ts":"2024-08-15T23:31:38.864626Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56"}
	{"level":"warn","ts":"2024-08-15T23:31:40.245395Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c8daa22dc1df7d56","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-15T23:33:44.790998Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(2565336393327939070 13314548521573537860 14473058669918387542) learners=(16392460569644178297)"}
	{"level":"info","ts":"2024-08-15T23:33:44.791775Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","added-peer-id":"e37db809803d1b79","added-peer-peer-urls":["https://192.169.0.9:2380"]}
	{"level":"info","ts":"2024-08-15T23:33:44.791839Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"e37db809803d1b79"}
	{"level":"info","ts":"2024-08-15T23:33:44.792046Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"e37db809803d1b79"}
	{"level":"info","ts":"2024-08-15T23:33:44.792379Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"e37db809803d1b79"}
	{"level":"info","ts":"2024-08-15T23:33:44.792754Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"e37db809803d1b79"}
	{"level":"info","ts":"2024-08-15T23:33:44.792761Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"e37db809803d1b79"}
	{"level":"info","ts":"2024-08-15T23:33:44.792771Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"e37db809803d1b79"}
	{"level":"info","ts":"2024-08-15T23:33:44.792821Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"e37db809803d1b79"}
	{"level":"info","ts":"2024-08-15T23:33:44.793195Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"e37db809803d1b79","remote-peer-urls":["https://192.169.0.9:2380"]}
	{"level":"warn","ts":"2024-08-15T23:33:44.868346Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"e37db809803d1b79","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-08-15T23:33:45.370156Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"e37db809803d1b79","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-08-15T23:33:45.861476Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"e37db809803d1b79","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-08-15T23:33:45.911420Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"e37db809803d1b79"}
	{"level":"info","ts":"2024-08-15T23:33:45.915155Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"e37db809803d1b79"}
	{"level":"info","ts":"2024-08-15T23:33:45.928161Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"e37db809803d1b79"}
	{"level":"info","ts":"2024-08-15T23:33:45.965642Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"e37db809803d1b79","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-15T23:33:45.965845Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"e37db809803d1b79"}
	{"level":"info","ts":"2024-08-15T23:33:45.971954Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"e37db809803d1b79","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-15T23:33:45.972027Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"e37db809803d1b79"}
	{"level":"info","ts":"2024-08-15T23:33:46.864082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(2565336393327939070 13314548521573537860 14473058669918387542 16392460569644178297)"}
	{"level":"info","ts":"2024-08-15T23:33:46.864762Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844"}
	{"level":"info","ts":"2024-08-15T23:33:46.864975Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"b8c6c7563d17d844","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"e37db809803d1b79"}
	
	
	==> kernel <==
	 23:34:16 up 4 min,  0 users,  load average: 0.17, 0.19, 0.09
	Linux ha-138000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [c2a16126718b] <==
	I0815 23:23:47.704130       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:23:57.712115       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0815 23:23:57.712139       1 main.go:299] handling current node
	I0815 23:23:57.712152       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0815 23:23:57.712157       1 main.go:322] Node ha-138000-m02 has CIDR [10.244.1.0/24] 
	I0815 23:23:57.712420       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0815 23:23:57.712543       1 main.go:322] Node ha-138000-m03 has CIDR [10.244.2.0/24] 
	I0815 23:23:57.712720       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0815 23:23:57.712823       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:24:07.712424       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0815 23:24:07.712474       1 main.go:299] handling current node
	I0815 23:24:07.712488       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0815 23:24:07.712494       1 main.go:322] Node ha-138000-m02 has CIDR [10.244.1.0/24] 
	I0815 23:24:07.712623       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0815 23:24:07.712704       1 main.go:322] Node ha-138000-m03 has CIDR [10.244.2.0/24] 
	I0815 23:24:07.712814       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0815 23:24:07.712851       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:24:17.705680       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0815 23:24:17.705716       1 main.go:322] Node ha-138000-m02 has CIDR [10.244.1.0/24] 
	I0815 23:24:17.706225       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0815 23:24:17.706282       1 main.go:322] Node ha-138000-m03 has CIDR [10.244.2.0/24] 
	I0815 23:24:17.706514       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0815 23:24:17.706582       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:24:17.706957       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0815 23:24:17.707108       1 main.go:299] handling current node
	
	
	==> kindnet [d35ee4327270] <==
	I0815 23:33:50.105489       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0815 23:33:50.105517       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:33:50.105672       1 main.go:295] Handling node with IPs: map[192.169.0.9:{}]
	I0815 23:33:50.105703       1 main.go:322] Node ha-138000-m05 has CIDR [10.244.4.0/24] 
	I0815 23:33:50.105988       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.4.0/24 Src: <nil> Gw: 192.169.0.9 Flags: [] Table: 0} 
	I0815 23:34:00.105537       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0815 23:34:00.105694       1 main.go:299] handling current node
	I0815 23:34:00.105742       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0815 23:34:00.105770       1 main.go:322] Node ha-138000-m02 has CIDR [10.244.1.0/24] 
	I0815 23:34:00.106116       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0815 23:34:00.106196       1 main.go:322] Node ha-138000-m03 has CIDR [10.244.2.0/24] 
	I0815 23:34:00.106364       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0815 23:34:00.106472       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:34:00.106635       1 main.go:295] Handling node with IPs: map[192.169.0.9:{}]
	I0815 23:34:00.106705       1 main.go:322] Node ha-138000-m05 has CIDR [10.244.4.0/24] 
	I0815 23:34:10.106586       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0815 23:34:10.106637       1 main.go:299] handling current node
	I0815 23:34:10.106650       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0815 23:34:10.106656       1 main.go:322] Node ha-138000-m02 has CIDR [10.244.1.0/24] 
	I0815 23:34:10.107007       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0815 23:34:10.107108       1 main.go:322] Node ha-138000-m03 has CIDR [10.244.2.0/24] 
	I0815 23:34:10.107444       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0815 23:34:10.107485       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:34:10.107537       1 main.go:295] Handling node with IPs: map[192.169.0.9:{}]
	I0815 23:34:10.107543       1 main.go:322] Node ha-138000-m05 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [5ed11c46e0eb] <==
	I0815 23:29:32.056397       1 options.go:228] external host was not specified, using 192.169.0.5
	I0815 23:29:32.057840       1 server.go:142] Version: v1.31.0
	I0815 23:29:32.057961       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:29:32.445995       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0815 23:29:32.449536       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 23:29:32.452083       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0815 23:29:32.452114       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0815 23:29:32.452276       1 instance.go:232] Using reconciler: lease
	W0815 23:29:49.041556       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:33594->127.0.0.1:2379: read: connection reset by peer"
	W0815 23:29:49.041696       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:33564->127.0.0.1:2379: read: connection reset by peer"
	W0815 23:29:49.041767       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:33580->127.0.0.1:2379: read: connection reset by peer"
	W0815 23:29:50.044022       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:29:50.044031       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:29:50.044267       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:29:51.372028       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:29:51.388445       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:29:51.855782       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0815 23:29:52.453885       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [a6baf6e21d6c] <==
	I0815 23:30:40.344140       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0815 23:30:40.344259       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0815 23:30:40.418768       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0815 23:30:40.419548       1 shared_informer.go:320] Caches are synced for configmaps
	I0815 23:30:40.420315       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0815 23:30:40.420931       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0815 23:30:40.424034       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0815 23:30:40.424129       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0815 23:30:40.424470       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0815 23:30:40.424883       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0815 23:30:40.425391       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0815 23:30:40.425745       1 aggregator.go:171] initial CRD sync complete...
	I0815 23:30:40.425776       1 autoregister_controller.go:144] Starting autoregister controller
	I0815 23:30:40.425782       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0815 23:30:40.425786       1 cache.go:39] Caches are synced for autoregister controller
	I0815 23:30:40.429758       1 shared_informer.go:320] Caches are synced for node_authorizer
	W0815 23:30:40.433000       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.6]
	I0815 23:30:40.451364       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 23:30:40.451641       1 policy_source.go:224] refreshing policies
	I0815 23:30:40.467536       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0815 23:30:40.536982       1 controller.go:615] quota admission added evaluator for: endpoints
	I0815 23:30:40.548680       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0815 23:30:40.556609       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0815 23:30:41.331073       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0815 23:30:41.666666       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	
	
	==> kube-controller-manager [2ed9ae042726] <==
	I0815 23:30:20.677986       1 serving.go:386] Generated self-signed cert in-memory
	I0815 23:30:20.928931       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0815 23:30:20.928987       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:30:20.930507       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0815 23:30:20.930593       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0815 23:30:20.931118       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0815 23:30:20.931317       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0815 23:30:40.940723       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [f4a0ec142726] <==
	E0815 23:33:44.354831       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-wqjwr failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-wqjwr\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0815 23:33:44.494010       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-138000-m05\" does not exist"
	I0815 23:33:44.494178       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-138000-m04"
	I0815 23:33:44.504885       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-138000-m05" podCIDRs=["10.244.4.0/24"]
	I0815 23:33:44.504924       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	I0815 23:33:44.505303       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	I0815 23:33:44.549041       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	I0815 23:33:44.744603       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	I0815 23:33:44.817084       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	I0815 23:33:46.814638       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-138000-m05"
	I0815 23:33:46.815166       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	I0815 23:33:46.866626       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	I0815 23:33:47.077137       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	I0815 23:33:47.734826       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	I0815 23:33:47.818103       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	I0815 23:33:49.474243       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	I0815 23:33:49.514611       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	I0815 23:33:52.481257       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	I0815 23:33:52.570383       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	I0815 23:33:54.926748       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	I0815 23:34:04.664748       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-138000-m04"
	I0815 23:34:04.665628       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	I0815 23:34:04.674364       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	I0815 23:34:04.780604       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	I0815 23:34:15.243177       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	
	
	==> kube-proxy [3102e608c7d6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 23:30:59.351348       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 23:30:59.378221       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0815 23:30:59.378378       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 23:30:59.417171       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 23:30:59.417213       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 23:30:59.417230       1 server_linux.go:169] "Using iptables Proxier"
	I0815 23:30:59.420831       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 23:30:59.421491       1 server.go:483] "Version info" version="v1.31.0"
	I0815 23:30:59.421522       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:30:59.424760       1 config.go:197] "Starting service config controller"
	I0815 23:30:59.425626       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 23:30:59.426090       1 config.go:104] "Starting endpoint slice config controller"
	I0815 23:30:59.426116       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 23:30:59.427803       1 config.go:326] "Starting node config controller"
	I0815 23:30:59.428510       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 23:30:59.526834       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 23:30:59.526859       1 shared_informer.go:320] Caches are synced for service config
	I0815 23:30:59.528661       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [fc2e141007ef] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 23:19:33.922056       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 23:19:33.939645       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0815 23:19:33.939881       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 23:19:33.966815       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 23:19:33.966963       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 23:19:33.967061       1 server_linux.go:169] "Using iptables Proxier"
	I0815 23:19:33.969119       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 23:19:33.969437       1 server.go:483] "Version info" version="v1.31.0"
	I0815 23:19:33.969466       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:19:33.970289       1 config.go:197] "Starting service config controller"
	I0815 23:19:33.970403       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 23:19:33.970441       1 config.go:104] "Starting endpoint slice config controller"
	I0815 23:19:33.970446       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 23:19:33.970870       1 config.go:326] "Starting node config controller"
	I0815 23:19:33.970895       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 23:19:34.070930       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 23:19:34.070930       1 shared_informer.go:320] Caches are synced for node config
	I0815 23:19:34.070944       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [ac6935271595] <==
	W0815 23:29:03.654257       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:29:03.654675       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:29:04.192220       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:29:04.192311       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:29:07.683875       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:29:07.683942       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:29:07.708489       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:29:07.708791       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:29:17.257133       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:29:17.257240       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:29:26.626316       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:29:26.626443       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:29:29.967116       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:29:29.967155       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:29:42.147720       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0815 23:29:42.148149       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0815 23:29:43.616204       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0815 23:29:43.616440       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0815 23:29:45.922991       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0815 23:29:45.923106       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	E0815 23:29:49.027901       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	I0815 23:29:49.028326       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0815 23:29:49.028478       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0815 23:29:49.028500       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file" logger="UnhandledError"
	E0815 23:29:49.029058       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c2ddb52a9846] <==
	I0815 23:30:20.706878       1 serving.go:386] Generated self-signed cert in-memory
	W0815 23:30:31.075526       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0815 23:30:31.075552       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0815 23:30:31.075556       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0815 23:30:40.370669       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0815 23:30:40.370712       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:30:40.375435       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0815 23:30:40.379182       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0815 23:30:40.379313       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0815 23:30:40.379473       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0815 23:30:40.480276       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0815 23:33:44.560164       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-446bp\": pod kube-proxy-446bp is already assigned to node \"ha-138000-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-446bp" node="ha-138000-m05"
	E0815 23:33:44.560266       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-446bp\": pod kube-proxy-446bp is already assigned to node \"ha-138000-m05\"" pod="kube-system/kube-proxy-446bp"
	E0815 23:33:44.557333       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-dwbgv\": pod kube-proxy-dwbgv is already assigned to node \"ha-138000-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-dwbgv" node="ha-138000-m05"
	E0815 23:33:44.561697       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod b2af37e5-4561-41a8-abff-c4b6f4042f0f(kube-system/kube-proxy-dwbgv) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-dwbgv"
	E0815 23:33:44.566064       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-dwbgv\": pod kube-proxy-dwbgv is already assigned to node \"ha-138000-m05\"" pod="kube-system/kube-proxy-dwbgv"
	I0815 23:33:44.566120       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-dwbgv" node="ha-138000-m05"
	E0815 23:33:44.566395       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qdhwz\": pod kindnet-qdhwz is already assigned to node \"ha-138000-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-qdhwz" node="ha-138000-m05"
	E0815 23:33:44.566731       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qdhwz\": pod kindnet-qdhwz is already assigned to node \"ha-138000-m05\"" pod="kube-system/kindnet-qdhwz"
	
	
	==> kubelet <==
	Aug 15 23:30:59 ha-138000 kubelet[1567]: I0815 23:30:59.824309    1567 scope.go:117] "RemoveContainer" containerID="6a1122913bb1811dd9cfff9fde8c221a2c969f80db1f0bcc1a66f58faaa88395"
	Aug 15 23:31:00 ha-138000 kubelet[1567]: I0815 23:31:00.825729    1567 scope.go:117] "RemoveContainer" containerID="8f20284cd3969cd69aa4dd7eb37b8d05c7df4f53aa8c6f636949fd401174eba1"
	Aug 15 23:31:01 ha-138000 kubelet[1567]: I0815 23:31:01.825360    1567 scope.go:117] "RemoveContainer" containerID="42f5d82b004174c93ffa1441e156ff5ca6d23b9457598805927d06b8823a41bd"
	Aug 15 23:31:03 ha-138000 kubelet[1567]: I0815 23:31:03.825285    1567 scope.go:117] "RemoveContainer" containerID="2ed9ae04272666896274c0cc9cbac7e240c18a02b0b35eaab975e10a79d1a635"
	Aug 15 23:31:12 ha-138000 kubelet[1567]: E0815 23:31:12.861012    1567 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 23:31:12 ha-138000 kubelet[1567]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 23:31:12 ha-138000 kubelet[1567]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 23:31:12 ha-138000 kubelet[1567]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 23:31:12 ha-138000 kubelet[1567]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 23:31:12 ha-138000 kubelet[1567]: I0815 23:31:12.976621    1567 scope.go:117] "RemoveContainer" containerID="e919017e14bb91f5bec7b5fdf0351f27904f841341d654e814d90d000a091f26"
	Aug 15 23:32:12 ha-138000 kubelet[1567]: E0815 23:32:12.862060    1567 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 23:32:12 ha-138000 kubelet[1567]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 23:32:12 ha-138000 kubelet[1567]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 23:32:12 ha-138000 kubelet[1567]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 23:32:12 ha-138000 kubelet[1567]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 23:33:12 ha-138000 kubelet[1567]: E0815 23:33:12.860851    1567 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 23:33:12 ha-138000 kubelet[1567]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 23:33:12 ha-138000 kubelet[1567]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 23:33:12 ha-138000 kubelet[1567]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 23:33:12 ha-138000 kubelet[1567]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 23:34:12 ha-138000 kubelet[1567]: E0815 23:34:12.860978    1567 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 23:34:12 ha-138000 kubelet[1567]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 23:34:12 ha-138000 kubelet[1567]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 23:34:12 ha-138000 kubelet[1567]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 23:34:12 ha-138000 kubelet[1567]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
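Reading note on the dump above: the `"command failed" err="finished without leader elect"` exit is the expected shutdown path for a control-plane component that loses (or never wins) its leadership lease while the HA cluster is restarting; client-go's leader election treats loss of the lease as fatal so that a replica on another control-plane node can take over. A minimal sketch of that pattern, assuming client-go is available and using an illustrative lease name and identity (not minikube's actual wiring):

package main

import (
	"context"
	"log"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	// Assumes KUBECONFIG points at a reachable cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Lease namespace, name, and identity are illustrative only.
	lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
		"kube-system", "example-controller", client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: "example-holder"})
	if err != nil {
		log.Fatal(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// A real component runs its controllers here until ctx is cancelled.
				<-ctx.Done()
			},
			OnStoppedLeading: func() {
				// Losing the lease is fatal by design, matching the exit logged above.
				log.Fatal("leader lease lost: exiting so another replica can take over")
			},
		},
	})
}

Losing the lease here ends the process with a non-zero exit, which is exactly what the run.go:72 line in the dump reports.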
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-138000 -n ha-138000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-138000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (81.72s)
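The kube-scheduler errors in this post-mortem (`Operation cannot be fulfilled on pods/binding ... already assigned`) are optimistic-concurrency conflicts rather than scheduling failures: while m05 was joining, the binder raced another writer (consistent with a second scheduler instance being briefly active during the membership change) for the same kube-proxy and kindnet pods, the API server rejected the losing Binding, and the retry path then saw the pod already placed and dropped it ("Pod has been assigned to node. Abort adding it back to queue."). A sketch of that bind-and-tolerate-conflict pattern with client-go (clientset construction omitted; the function name is illustrative):

package sketch

import (
	"context"

	v1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// bindPod issues the same Binding call the scheduler's DefaultBinder makes and
// treats a 409 Conflict ("already assigned") as success instead of an error.
func bindPod(ctx context.Context, client kubernetes.Interface, pod *v1.Pod, node string) error {
	binding := &v1.Binding{
		ObjectMeta: metav1.ObjectMeta{Namespace: pod.Namespace, Name: pod.Name, UID: pod.UID},
		Target:     v1.ObjectReference{Kind: "Node", Name: node},
	}
	err := client.CoreV1().Pods(pod.Namespace).Bind(ctx, binding, metav1.CreateOptions{})
	if apierrors.IsConflict(err) {
		// Another binder won the race; the pod is already placed, so don't requeue.
		return nil
	}
	return err
}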

x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (4.7s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:304: expected profile "ha-138000" in json of 'profile list' to include 4 nodes but have 5 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-138000\",\"Status\":\"HAppy\",\"Config\":{\"Name\":\"ha-138000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-138000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.169.0.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true},{\"Name\":\"m05\",\"IP\":\"192.169.0.9\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
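Unwrapped, the JSON above shows why the assertion failed: Config.Nodes lists five entries, the fifth being the freshly added m05 (ControlPlane true, ContainerRuntime still empty), while ha_test.go:304 expected four. A small sketch that pulls just the node count out of `profile list --output json`, decoding only the fields the check touches (binary path as used in this run):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// profileList mirrors only the fields the test assertion inspects.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []struct {
				Name             string `json:"Name"`
				ControlPlane     bool   `json:"ControlPlane"`
				ContainerRuntime string `json:"ContainerRuntime"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64", "profile", "list", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		log.Fatal(err)
	}
	for _, p := range pl.Valid {
		// For this run: ha-138000 reports 5 nodes, m05 with an empty ContainerRuntime.
		fmt.Printf("%s: %d nodes\n", p.Name, len(p.Config.Nodes))
	}
}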
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-138000 -n ha-138000
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-138000 logs -n 25: (3.508737191s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n ha-138000-m04 sudo cat                                                                                      | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /home/docker/cp-test_ha-138000-m03_ha-138000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-138000 cp testdata/cp-test.txt                                                                                            | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-138000 cp ha-138000-m04:/home/docker/cp-test.txt                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1119105544/001/cp-test_ha-138000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-138000 cp ha-138000-m04:/home/docker/cp-test.txt                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000:/home/docker/cp-test_ha-138000-m04_ha-138000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n ha-138000 sudo cat                                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /home/docker/cp-test_ha-138000-m04_ha-138000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-138000 cp ha-138000-m04:/home/docker/cp-test.txt                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m02:/home/docker/cp-test_ha-138000-m04_ha-138000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n ha-138000-m02 sudo cat                                                                                      | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /home/docker/cp-test_ha-138000-m04_ha-138000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-138000 cp ha-138000-m04:/home/docker/cp-test.txt                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m03:/home/docker/cp-test_ha-138000-m04_ha-138000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | ha-138000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-138000 ssh -n ha-138000-m03 sudo cat                                                                                      | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | /home/docker/cp-test_ha-138000-m04_ha-138000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-138000 node stop m02 -v=7                                                                                                 | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:23 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-138000 node start m02 -v=7                                                                                                | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:23 PDT | 15 Aug 24 16:24 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-138000 -v=7                                                                                                       | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:24 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-138000 -v=7                                                                                                            | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:24 PDT | 15 Aug 24 16:24 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-138000 --wait=true -v=7                                                                                                | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:24 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-138000                                                                                                            | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:26 PDT |                     |
	| node    | ha-138000 node delete m03 -v=7                                                                                               | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:26 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-138000 stop -v=7                                                                                                          | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:27 PDT | 15 Aug 24 16:29 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-138000 --wait=true                                                                                                     | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:29 PDT | 15 Aug 24 16:32 PDT |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	| node    | add -p ha-138000                                                                                                             | ha-138000 | jenkins | v1.33.1 | 15 Aug 24 16:32 PDT | 15 Aug 24 16:34 PDT |
	|         | --control-plane -v=7                                                                                                         |           |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 16:29:54
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 16:29:54.033682    3848 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:29:54.033848    3848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:29:54.033854    3848 out.go:358] Setting ErrFile to fd 2...
	I0815 16:29:54.033858    3848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:29:54.034027    3848 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-977/.minikube/bin
	I0815 16:29:54.035457    3848 out.go:352] Setting JSON to false
	I0815 16:29:54.058003    3848 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1765,"bootTime":1723762829,"procs":428,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0815 16:29:54.058095    3848 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 16:29:54.080014    3848 out.go:177] * [ha-138000] minikube v1.33.1 on Darwin 14.6.1
	I0815 16:29:54.122634    3848 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 16:29:54.122696    3848 notify.go:220] Checking for updates...
	I0815 16:29:54.164406    3848 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:29:54.185700    3848 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0815 16:29:54.206554    3848 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 16:29:54.227614    3848 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-977/.minikube
	I0815 16:29:54.248519    3848 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 16:29:54.270441    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:29:54.271125    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:29:54.271225    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:29:54.280836    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52223
	I0815 16:29:54.281188    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:29:54.281595    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:29:54.281610    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:29:54.281823    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:29:54.281934    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:29:54.282121    3848 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 16:29:54.282360    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:29:54.282379    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:29:54.290749    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52225
	I0815 16:29:54.291068    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:29:54.291384    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:29:54.291393    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:29:54.291633    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:29:54.291762    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:29:54.320542    3848 out.go:177] * Using the hyperkit driver based on existing profile
	I0815 16:29:54.362577    3848 start.go:297] selected driver: hyperkit
	I0815 16:29:54.362603    3848 start.go:901] validating driver "hyperkit" against &{Name:ha-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:29:54.362832    3848 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 16:29:54.363029    3848 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:29:54.363230    3848 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19452-977/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0815 16:29:54.372833    3848 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0815 16:29:54.376641    3848 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:29:54.376661    3848 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0815 16:29:54.379303    3848 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 16:29:54.379340    3848 cni.go:84] Creating CNI manager for ""
	I0815 16:29:54.379348    3848 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0815 16:29:54.379445    3848 start.go:340] cluster config:
	{Name:ha-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:29:54.379558    3848 iso.go:125] acquiring lock: {Name:mkb93fc66b60be15dffa9eb2999a4be8d8d4d218 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:29:54.421457    3848 out.go:177] * Starting "ha-138000" primary control-plane node in "ha-138000" cluster
	I0815 16:29:54.442393    3848 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:29:54.442490    3848 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0815 16:29:54.442517    3848 cache.go:56] Caching tarball of preloaded images
	I0815 16:29:54.442747    3848 preload.go:172] Found /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0815 16:29:54.442766    3848 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:29:54.442942    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:29:54.443891    3848 start.go:360] acquireMachinesLock for ha-138000: {Name:mkac8034c8a60617271f2754a03dcaf74fc4fef7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:29:54.444072    3848 start.go:364] duration metric: took 141.088µs to acquireMachinesLock for "ha-138000"
	I0815 16:29:54.444120    3848 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:29:54.444137    3848 fix.go:54] fixHost starting: 
	I0815 16:29:54.444553    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:29:54.444588    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:29:54.453701    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52227
	I0815 16:29:54.454060    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:29:54.454408    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:29:54.454428    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:29:54.454668    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:29:54.454795    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:29:54.454900    3848 main.go:141] libmachine: (ha-138000) Calling .GetState
	I0815 16:29:54.455015    3848 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:29:54.455069    3848 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid from json: 3662
	I0815 16:29:54.455998    3848 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid 3662 missing from process table
	I0815 16:29:54.456024    3848 fix.go:112] recreateIfNeeded on ha-138000: state=Stopped err=<nil>
	I0815 16:29:54.456037    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	W0815 16:29:54.456128    3848 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:29:54.477408    3848 out.go:177] * Restarting existing hyperkit VM for "ha-138000" ...
	I0815 16:29:54.498281    3848 main.go:141] libmachine: (ha-138000) Calling .Start
	I0815 16:29:54.498449    3848 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:29:54.498522    3848 main.go:141] libmachine: (ha-138000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/hyperkit.pid
	I0815 16:29:54.498549    3848 main.go:141] libmachine: (ha-138000) DBG | Using UUID bf1b12d0-37a9-4c04-a028-0dd0a6dcd337
	I0815 16:29:54.612230    3848 main.go:141] libmachine: (ha-138000) DBG | Generated MAC 66:4d:cd:54:35:15
	I0815 16:29:54.612256    3848 main.go:141] libmachine: (ha-138000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000
	I0815 16:29:54.612403    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bf1b12d0-37a9-4c04-a028-0dd0a6dcd337", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002a9530)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:29:54.612447    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bf1b12d0-37a9-4c04-a028-0dd0a6dcd337", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002a9530)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:29:54.612479    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "bf1b12d0-37a9-4c04-a028-0dd0a6dcd337", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/ha-138000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"}
	I0815 16:29:54.612534    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U bf1b12d0-37a9-4c04-a028-0dd0a6dcd337 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/ha-138000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/console-ring -f kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"
	I0815 16:29:54.612554    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0815 16:29:54.613954    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 DEBUG: hyperkit: Pid is 3862
	I0815 16:29:54.614352    3848 main.go:141] libmachine: (ha-138000) DBG | Attempt 0
	I0815 16:29:54.614367    3848 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:29:54.614458    3848 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid from json: 3862
	I0815 16:29:54.615668    3848 main.go:141] libmachine: (ha-138000) DBG | Searching for 66:4d:cd:54:35:15 in /var/db/dhcpd_leases ...
	I0815 16:29:54.615762    3848 main.go:141] libmachine: (ha-138000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0815 16:29:54.615788    3848 main.go:141] libmachine: (ha-138000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66be8f71}
	I0815 16:29:54.615808    3848 main.go:141] libmachine: (ha-138000) DBG | Found match: 66:4d:cd:54:35:15
	I0815 16:29:54.615836    3848 main.go:141] libmachine: (ha-138000) DBG | IP: 192.169.0.5
	I0815 16:29:54.615932    3848 main.go:141] libmachine: (ha-138000) Calling .GetConfigRaw
	I0815 16:29:54.616670    3848 main.go:141] libmachine: (ha-138000) Calling .GetIP
	I0815 16:29:54.616859    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:29:54.617254    3848 machine.go:93] provisionDockerMachine start ...
	I0815 16:29:54.617264    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:29:54.617414    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:29:54.617528    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:29:54.617607    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:29:54.617679    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:29:54.617801    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:29:54.617967    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:29:54.618192    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:29:54.618201    3848 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 16:29:54.621800    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0815 16:29:54.673574    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0815 16:29:54.674258    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:29:54.674277    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:29:54.674284    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:29:54.674293    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:29:55.057707    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:55 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0815 16:29:55.057723    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:55 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0815 16:29:55.172245    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:29:55.172277    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:29:55.172313    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:29:55.172333    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:29:55.173142    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:55 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0815 16:29:55.173153    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:29:55 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0815 16:30:00.749814    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:30:00 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0815 16:30:00.749867    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:30:00 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0815 16:30:00.749877    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:30:00 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0815 16:30:00.774690    3848 main.go:141] libmachine: (ha-138000) DBG | 2024/08/15 16:30:00 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0815 16:30:05.697072    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 16:30:05.697084    3848 main.go:141] libmachine: (ha-138000) Calling .GetMachineName
	I0815 16:30:05.697230    3848 buildroot.go:166] provisioning hostname "ha-138000"
	I0815 16:30:05.697241    3848 main.go:141] libmachine: (ha-138000) Calling .GetMachineName
	I0815 16:30:05.697340    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:05.697431    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:05.697531    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:05.697615    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:05.697729    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:05.697864    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:05.698023    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:30:05.698032    3848 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-138000 && echo "ha-138000" | sudo tee /etc/hostname
	I0815 16:30:05.773271    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-138000
	
	I0815 16:30:05.773290    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:05.773430    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:05.773543    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:05.773660    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:05.773777    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:05.773935    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:05.774084    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:30:05.774095    3848 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-138000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-138000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-138000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 16:30:05.843913    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 16:30:05.843933    3848 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19452-977/.minikube CaCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19452-977/.minikube}
	I0815 16:30:05.843947    3848 buildroot.go:174] setting up certificates
	I0815 16:30:05.843955    3848 provision.go:84] configureAuth start
	I0815 16:30:05.843962    3848 main.go:141] libmachine: (ha-138000) Calling .GetMachineName
	I0815 16:30:05.844101    3848 main.go:141] libmachine: (ha-138000) Calling .GetIP
	I0815 16:30:05.844215    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:05.844315    3848 provision.go:143] copyHostCerts
	I0815 16:30:05.844350    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:30:05.844436    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem, removing ...
	I0815 16:30:05.844445    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:30:05.844633    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem (1082 bytes)
	I0815 16:30:05.844853    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:30:05.844900    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem, removing ...
	I0815 16:30:05.844906    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:30:05.844989    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem (1123 bytes)
	I0815 16:30:05.845165    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:30:05.845202    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem, removing ...
	I0815 16:30:05.845207    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:30:05.845283    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem (1679 bytes)
	I0815 16:30:05.845432    3848 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem org=jenkins.ha-138000 san=[127.0.0.1 192.169.0.5 ha-138000 localhost minikube]
	I0815 16:30:06.272971    3848 provision.go:177] copyRemoteCerts
	I0815 16:30:06.273031    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 16:30:06.273048    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:06.273185    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:06.273289    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:06.273389    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:06.273476    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:30:06.313671    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 16:30:06.313804    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 16:30:06.335207    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 16:30:06.335264    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0815 16:30:06.355028    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 16:30:06.355085    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 16:30:06.374691    3848 provision.go:87] duration metric: took 530.722569ms to configureAuth
	I0815 16:30:06.374705    3848 buildroot.go:189] setting minikube options for container-runtime
	I0815 16:30:06.374882    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:30:06.374898    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:30:06.375031    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:06.375135    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:06.375215    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:06.375302    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:06.375381    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:06.375501    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:06.375633    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:30:06.375641    3848 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0815 16:30:06.439797    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0815 16:30:06.439813    3848 buildroot.go:70] root file system type: tmpfs
	I0815 16:30:06.439885    3848 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0815 16:30:06.439896    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:06.440029    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:06.440119    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:06.440211    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:06.440322    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:06.440461    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:06.440594    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:30:06.440647    3848 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0815 16:30:06.516125    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0815 16:30:06.516150    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:06.516294    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:06.516408    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:06.516493    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:06.516594    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:06.516721    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:06.516850    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:30:06.516863    3848 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0815 16:30:08.163546    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0815 16:30:08.163562    3848 machine.go:96] duration metric: took 13.546346493s to provisionDockerMachine
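	[editor note] The diff/mv/systemctl one-liner above is a write-if-changed guard: diff exits non-zero when the installed unit is missing or differs from the freshly rendered docker.service.new, and only then is the new file moved into place and docker reloaded, enabled, and restarted. Here diff reported "No such file or directory", i.e. a first install. A minimal standalone sketch of the same idiom (file names hypothetical):
	    sudo diff -u /etc/foo.conf /etc/foo.conf.new \
	      || { sudo mv /etc/foo.conf.new /etc/foo.conf; sudo systemctl daemon-reload; }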
	I0815 16:30:08.163573    3848 start.go:293] postStartSetup for "ha-138000" (driver="hyperkit")
	I0815 16:30:08.163581    3848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 16:30:08.163591    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:30:08.163828    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 16:30:08.163844    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:08.163938    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:08.164036    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:08.164139    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:08.164243    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:30:08.204020    3848 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 16:30:08.207179    3848 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 16:30:08.207192    3848 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/addons for local assets ...
	I0815 16:30:08.207302    3848 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/files for local assets ...
	I0815 16:30:08.207487    3848 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> 14982.pem in /etc/ssl/certs
	I0815 16:30:08.207494    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /etc/ssl/certs/14982.pem
	I0815 16:30:08.207699    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 16:30:08.215716    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:30:08.234526    3848 start.go:296] duration metric: took 70.944461ms for postStartSetup
	I0815 16:30:08.234554    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:30:08.234725    3848 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0815 16:30:08.234737    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:08.234828    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:08.234919    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:08.235004    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:08.235082    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:30:08.273169    3848 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0815 16:30:08.273225    3848 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0815 16:30:08.324608    3848 fix.go:56] duration metric: took 13.880521363s for fixHost
	I0815 16:30:08.324634    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:08.324763    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:08.324864    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:08.324958    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:08.325046    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:08.325174    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:08.325312    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0815 16:30:08.325319    3848 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 16:30:08.390142    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723764608.424079213
	
	I0815 16:30:08.390153    3848 fix.go:216] guest clock: 1723764608.424079213
	I0815 16:30:08.390158    3848 fix.go:229] Guest: 2024-08-15 16:30:08.424079213 -0700 PDT Remote: 2024-08-15 16:30:08.324621 -0700 PDT m=+14.326357489 (delta=99.458213ms)
	I0815 16:30:08.390181    3848 fix.go:200] guest clock delta is within tolerance: 99.458213ms
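	[editor note] The clock check above compares the guest's `date +%s.%N` output against the host timestamp captured at the end of fixHost; the delta is a plain subtraction of Unix epoch times:
	    delta = 1723764608.424079213 - 1723764608.324621 = 0.099458213 s ≈ 99.458213 ms
	which matches the logged value and sits inside minikube's tolerance, so no guest clock resync is performed.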
	I0815 16:30:08.390185    3848 start.go:83] releasing machines lock for "ha-138000", held for 13.946148575s
	I0815 16:30:08.390205    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:30:08.390341    3848 main.go:141] libmachine: (ha-138000) Calling .GetIP
	I0815 16:30:08.390446    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:30:08.390809    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:30:08.390921    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:30:08.390989    3848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 16:30:08.391019    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:08.391075    3848 ssh_runner.go:195] Run: cat /version.json
	I0815 16:30:08.391087    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:08.391112    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:08.391203    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:08.391220    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:08.391315    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:08.391333    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:08.391411    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:08.391426    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:30:08.391513    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:30:08.423504    3848 ssh_runner.go:195] Run: systemctl --version
	I0815 16:30:08.428371    3848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 16:30:08.479207    3848 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 16:30:08.479307    3848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 16:30:08.492318    3848 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 16:30:08.492331    3848 start.go:495] detecting cgroup driver to use...
	I0815 16:30:08.492428    3848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:30:08.510522    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0815 16:30:08.519382    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0815 16:30:08.528348    3848 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0815 16:30:08.528399    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0815 16:30:08.537505    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:30:08.546478    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0815 16:30:08.555462    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:30:08.564389    3848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 16:30:08.573622    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0815 16:30:08.582698    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0815 16:30:08.591735    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0815 16:30:08.600760    3848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 16:30:08.609049    3848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 16:30:08.617235    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:08.722765    3848 ssh_runner.go:195] Run: sudo systemctl restart containerd
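	[editor note] The sed series above rewrites /etc/containerd/config.toml before the restart: it pins sandbox_image to registry.k8s.io/pause:3.10, forces restrict_oom_score_adj = false, selects the cgroupfs driver (SystemdCgroup = false), migrates io.containerd.runtime.v1.linux / runc.v1 entries to io.containerd.runc.v2, and points the CNI conf_dir at /etc/cni/net.d. The cgroup choice can be spot-checked on the guest with:
	    sudo grep -n 'SystemdCgroup' /etc/containerd/config.toml    # expect: SystemdCgroup = false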
	I0815 16:30:08.746033    3848 start.go:495] detecting cgroup driver to use...
	I0815 16:30:08.746116    3848 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0815 16:30:08.759830    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:30:08.771599    3848 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 16:30:08.789529    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:30:08.802787    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:30:08.815377    3848 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0815 16:30:08.844257    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:30:08.860249    3848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:30:08.875283    3848 ssh_runner.go:195] Run: which cri-dockerd
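	[editor note] /etc/crictl.yaml is rewritten here a second time, retargeting crictl from containerd's socket to cri-dockerd's, since this profile runs the docker container runtime. The same endpoint can be probed ad hoc, bypassing the config file:
	    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version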
	I0815 16:30:08.878327    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0815 16:30:08.886411    3848 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0815 16:30:08.899899    3848 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0815 16:30:09.005084    3848 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0815 16:30:09.128876    3848 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0815 16:30:09.128948    3848 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0815 16:30:09.143602    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:09.247986    3848 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0815 16:30:11.515907    3848 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.267909782s)
	I0815 16:30:11.515971    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0815 16:30:11.526125    3848 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0815 16:30:11.539600    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:30:11.550726    3848 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0815 16:30:11.659005    3848 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0815 16:30:11.764312    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:11.871322    3848 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0815 16:30:11.884643    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:30:11.896838    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:12.002912    3848 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0815 16:30:12.062997    3848 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0815 16:30:12.063089    3848 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0815 16:30:12.067549    3848 start.go:563] Will wait 60s for crictl version
	I0815 16:30:12.067596    3848 ssh_runner.go:195] Run: which crictl
	I0815 16:30:12.070446    3848 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 16:30:12.096434    3848 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0815 16:30:12.096513    3848 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:30:12.116037    3848 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:30:12.178340    3848 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0815 16:30:12.178421    3848 main.go:141] libmachine: (ha-138000) Calling .GetIP
	I0815 16:30:12.178824    3848 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0815 16:30:12.183375    3848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
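	[editor note] The /etc/hosts rewrite above is a filter-and-append pattern that stays correct under sudo: grep -v drops any stale host.minikube.internal mapping, the brace group appends the fresh one into a temp file, and sudo cp installs the result (a plain `sudo ... > /etc/hosts` redirect would open the target with the caller's privileges, not root's). Generic sketch with a hypothetical name and IP:
	    { grep -v $'\texample.internal$' /etc/hosts; echo $'10.0.0.1\texample.internal'; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts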
	I0815 16:30:12.193025    3848 kubeadm.go:883] updating cluster {Name:ha-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 16:30:12.193108    3848 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:30:12.193158    3848 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0815 16:30:12.206441    3848 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0815 16:30:12.206452    3848 docker.go:615] Images already preloaded, skipping extraction
	I0815 16:30:12.206519    3848 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0815 16:30:12.219546    3848 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0815 16:30:12.219565    3848 cache_images.go:84] Images are preloaded, skipping loading
	I0815 16:30:12.219576    3848 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0815 16:30:12.219652    3848 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-138000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
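	[editor note] The [Unit]/[Service] snippet above is the kubelet systemd drop-in minikube renders (installed further down in this log as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf); the empty ExecStart= clears the inherited command before pinning the node-specific flags, the same trick used for docker.service earlier. On the guest the merged unit can be inspected with:
	    sudo systemctl cat kubelet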
	I0815 16:30:12.219721    3848 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0815 16:30:12.258519    3848 cni.go:84] Creating CNI manager for ""
	I0815 16:30:12.258529    3848 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0815 16:30:12.258542    3848 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 16:30:12.258557    3848 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-138000 NodeName:ha-138000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 16:30:12.258636    3848 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-138000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
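	[editor note] The rendered kubeadm file above stacks four YAML documents: InitConfiguration (node registration, advertise address 192.169.0.5), ClusterConfiguration (control-plane endpoint, cert SANs, etcd), KubeletConfiguration (cgroupfs driver, cri-dockerd endpoint, disk eviction disabled), and KubeProxyConfiguration (cluster CIDR, conntrack timeouts zeroed so kubeadm skips those sysctls). Recent kubeadm releases can sanity-check such a file before it is applied; hedged, as flag availability depends on the kubeadm version in use:
	    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new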
	
	I0815 16:30:12.258649    3848 kube-vip.go:115] generating kube-vip config ...
	I0815 16:30:12.258696    3848 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 16:30:12.271337    3848 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 16:30:12.271407    3848 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
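	[editor note] The manifest above is a static pod (written below to /etc/kubernetes/manifests/kube-vip.yaml): kube-vip runs on the host network with NET_ADMIN/NET_RAW so it can ARP-advertise the control-plane VIP 192.169.0.254 on eth0 and, with lb_enable/lb_port, spread API-server traffic on 8443 across control-plane nodes. Once the node is Ready it could be checked with (pod name assumed; the kubelet suffixes static pods with the node name):
	    kubectl -n kube-system get pod kube-vip-ha-138000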
	I0815 16:30:12.271468    3848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 16:30:12.279197    3848 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 16:30:12.279243    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0815 16:30:12.286309    3848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0815 16:30:12.299687    3848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 16:30:12.313389    3848 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0815 16:30:12.327846    3848 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0815 16:30:12.341535    3848 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0815 16:30:12.344364    3848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 16:30:12.353627    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:12.452370    3848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:30:12.466830    3848 certs.go:68] Setting up /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000 for IP: 192.169.0.5
	I0815 16:30:12.466842    3848 certs.go:194] generating shared ca certs ...
	I0815 16:30:12.466852    3848 certs.go:226] acquiring lock for ca certs: {Name:mka1c88019f6064d2983ac988db71a67aeb65696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:30:12.467038    3848 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key
	I0815 16:30:12.467111    3848 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key
	I0815 16:30:12.467121    3848 certs.go:256] generating profile certs ...
	I0815 16:30:12.467229    3848 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.key
	I0815 16:30:12.467304    3848 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key.7af4c91a
	I0815 16:30:12.467369    3848 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key
	I0815 16:30:12.467377    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 16:30:12.467397    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 16:30:12.467414    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 16:30:12.467432    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 16:30:12.467450    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 16:30:12.467479    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 16:30:12.467508    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 16:30:12.467527    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 16:30:12.467627    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem (1338 bytes)
	W0815 16:30:12.467674    3848 certs.go:480] ignoring /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498_empty.pem, impossibly tiny 0 bytes
	I0815 16:30:12.467683    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 16:30:12.467721    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem (1082 bytes)
	I0815 16:30:12.467762    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem (1123 bytes)
	I0815 16:30:12.467793    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem (1679 bytes)
	I0815 16:30:12.467866    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:30:12.467898    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:30:12.467918    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem -> /usr/share/ca-certificates/1498.pem
	I0815 16:30:12.467935    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /usr/share/ca-certificates/14982.pem
	I0815 16:30:12.468350    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 16:30:12.503573    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 16:30:12.529609    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 16:30:12.555283    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 16:30:12.583638    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 16:30:12.612822    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 16:30:12.658082    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 16:30:12.709731    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 16:30:12.747480    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 16:30:12.797444    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem --> /usr/share/ca-certificates/1498.pem (1338 bytes)
	I0815 16:30:12.830947    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /usr/share/ca-certificates/14982.pem (1708 bytes)
	I0815 16:30:12.850811    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 16:30:12.864245    3848 ssh_runner.go:195] Run: openssl version
	I0815 16:30:12.868404    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 16:30:12.876802    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:30:12.880151    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:05 /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:30:12.880186    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:30:12.884283    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 16:30:12.892538    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1498.pem && ln -fs /usr/share/ca-certificates/1498.pem /etc/ssl/certs/1498.pem"
	I0815 16:30:12.900652    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1498.pem
	I0815 16:30:12.904017    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:14 /usr/share/ca-certificates/1498.pem
	I0815 16:30:12.904050    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1498.pem
	I0815 16:30:12.908285    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1498.pem /etc/ssl/certs/51391683.0"
	I0815 16:30:12.916567    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14982.pem && ln -fs /usr/share/ca-certificates/14982.pem /etc/ssl/certs/14982.pem"
	I0815 16:30:12.924847    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14982.pem
	I0815 16:30:12.928159    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:14 /usr/share/ca-certificates/14982.pem
	I0815 16:30:12.928193    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14982.pem
	I0815 16:30:12.932352    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14982.pem /etc/ssl/certs/3ec20f2e.0"
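	[editor note] The test/ln runs above install each CA under the OpenSSL trust-directory convention: /etc/ssl/certs/<subject-hash>.0 must symlink to the PEM, where the hash is what `openssl x509 -hash` prints (b5213941 for minikubeCA, 51391683 and 3ec20f2e for the test certs). Reproduced generically:
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"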
	I0815 16:30:12.940679    3848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 16:30:12.943953    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 16:30:12.948281    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 16:30:12.952498    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 16:30:12.956859    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 16:30:12.961066    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 16:30:12.965237    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
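	[editor note] The six openssl runs above are expiry probes rather than full chain verifications: -checkend 86400 exits non-zero if the certificate expires within the next 24 hours, which is what would push minikube to regenerate it. Standalone form:
	    openssl x509 -noout -in cert.pem -checkend 86400 && echo ok || echo 'expires within 24h'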
	I0815 16:30:12.969424    3848 kubeadm.go:392] StartCluster: {Name:ha-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:30:12.969537    3848 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0815 16:30:12.983217    3848 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 16:30:12.990985    3848 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 16:30:12.990998    3848 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 16:30:12.991037    3848 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 16:30:12.998611    3848 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 16:30:12.998906    3848 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-138000" does not appear in /Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:30:12.998990    3848 kubeconfig.go:62] /Users/jenkins/minikube-integration/19452-977/kubeconfig needs updating (will repair): [kubeconfig missing "ha-138000" cluster setting kubeconfig missing "ha-138000" context setting]
	I0815 16:30:12.999150    3848 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-977/kubeconfig: {Name:mk0f04b0199f84dc20eb294d5b790451b41e43fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:30:12.999761    3848 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:30:12.999936    3848 kapi.go:59] client config for ha-138000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.key", CAFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x3a6cf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0815 16:30:13.000222    3848 cert_rotation.go:140] Starting client certificate rotation controller
	I0815 16:30:13.000394    3848 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 16:30:13.007927    3848 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0815 16:30:13.007944    3848 kubeadm.go:597] duration metric: took 16.941718ms to restartPrimaryControlPlane
	I0815 16:30:13.007950    3848 kubeadm.go:394] duration metric: took 38.534887ms to StartCluster
	I0815 16:30:13.007960    3848 settings.go:142] acquiring lock: {Name:mk694dad19d37394fa6b13c51a7dc54b62e97c12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:30:13.008036    3848 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:30:13.008396    3848 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-977/kubeconfig: {Name:mk0f04b0199f84dc20eb294d5b790451b41e43fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:30:13.008625    3848 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 16:30:13.008644    3848 start.go:241] waiting for startup goroutines ...
	I0815 16:30:13.008652    3848 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 16:30:13.008752    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:30:13.052465    3848 out.go:177] * Enabled addons: 
	I0815 16:30:13.073695    3848 addons.go:510] duration metric: took 65.048594ms for enable addons: enabled=[]
	I0815 16:30:13.073733    3848 start.go:246] waiting for cluster config update ...
	I0815 16:30:13.073745    3848 start.go:255] writing updated cluster config ...
	I0815 16:30:13.095512    3848 out.go:201] 
	I0815 16:30:13.116951    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:30:13.117068    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:30:13.139649    3848 out.go:177] * Starting "ha-138000-m02" control-plane node in "ha-138000" cluster
	I0815 16:30:13.181551    3848 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:30:13.181610    3848 cache.go:56] Caching tarball of preloaded images
	I0815 16:30:13.181807    3848 preload.go:172] Found /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0815 16:30:13.181826    3848 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:30:13.181935    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:30:13.182895    3848 start.go:360] acquireMachinesLock for ha-138000-m02: {Name:mkac8034c8a60617271f2754a03dcaf74fc4fef7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:30:13.183018    3848 start.go:364] duration metric: took 98.069µs to acquireMachinesLock for "ha-138000-m02"
	I0815 16:30:13.183044    3848 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:30:13.183051    3848 fix.go:54] fixHost starting: m02
	I0815 16:30:13.183444    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:30:13.183470    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:30:13.192973    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52251
	I0815 16:30:13.193340    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:30:13.193664    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:30:13.193677    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:30:13.193949    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:30:13.194068    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:30:13.194158    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetState
	I0815 16:30:13.194250    3848 main.go:141] libmachine: (ha-138000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:30:13.194330    3848 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid from json: 3670
	I0815 16:30:13.195266    3848 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid 3670 missing from process table
	I0815 16:30:13.195300    3848 fix.go:112] recreateIfNeeded on ha-138000-m02: state=Stopped err=<nil>
	I0815 16:30:13.195308    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	W0815 16:30:13.195387    3848 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:30:13.216598    3848 out.go:177] * Restarting existing hyperkit VM for "ha-138000-m02" ...
	I0815 16:30:13.258591    3848 main.go:141] libmachine: (ha-138000-m02) Calling .Start
	I0815 16:30:13.258850    3848 main.go:141] libmachine: (ha-138000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:30:13.258951    3848 main.go:141] libmachine: (ha-138000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/hyperkit.pid
	I0815 16:30:13.260726    3848 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid 3670 missing from process table
	I0815 16:30:13.260746    3848 main.go:141] libmachine: (ha-138000-m02) DBG | pid 3670 is in state "Stopped"
	I0815 16:30:13.260762    3848 main.go:141] libmachine: (ha-138000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/hyperkit.pid...
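
Above, the driver reads pid 3670 from hyperkit.pid, finds it absent from the process table, and deletes the stale file before restarting the VM. A sketch of that staleness check using the conventional kill(pid, 0) liveness probe (helper names are mine, not the driver's):

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"syscall"
)

// pidAlive reports whether a process with the given pid exists; EPERM from
// kill(pid, 0) still means the process is alive, just not ours.
func pidAlive(pid int) bool {
	err := syscall.Kill(pid, 0)
	return err == nil || err == syscall.EPERM
}

// removeIfStale deletes pidFile when the recorded process is gone.
func removeIfStale(pidFile string) error {
	data, err := os.ReadFile(pidFile)
	if err != nil {
		return err
	}
	pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
	if err != nil {
		return err
	}
	if pidAlive(pid) {
		return fmt.Errorf("pid %d is still running", pid)
	}
	return os.Remove(pidFile)
}

func main() {
	if err := removeIfStale("/tmp/hyperkit.pid"); err != nil {
		fmt.Println(err)
	}
}
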
	I0815 16:30:13.261090    3848 main.go:141] libmachine: (ha-138000-m02) DBG | Using UUID 4cff9b5a-9fe3-4215-9139-05f05b79bce3
	I0815 16:30:13.290755    3848 main.go:141] libmachine: (ha-138000-m02) DBG | Generated MAC 9a:c2:e9:d7:1c:58
	I0815 16:30:13.290775    3848 main.go:141] libmachine: (ha-138000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000
	I0815 16:30:13.290894    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"4cff9b5a-9fe3-4215-9139-05f05b79bce3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003beae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:30:13.290919    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"4cff9b5a-9fe3-4215-9139-05f05b79bce3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003beae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:30:13.290973    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "4cff9b5a-9fe3-4215-9139-05f05b79bce3", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/ha-138000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"}
	I0815 16:30:13.291003    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 4cff9b5a-9fe3-4215-9139-05f05b79bce3 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/ha-138000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"
	I0815 16:30:13.291039    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0815 16:30:13.292431    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 DEBUG: hyperkit: Pid is 4167
	I0815 16:30:13.292922    3848 main.go:141] libmachine: (ha-138000-m02) DBG | Attempt 0
	I0815 16:30:13.292931    3848 main.go:141] libmachine: (ha-138000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:30:13.292988    3848 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid from json: 4167
	I0815 16:30:13.294816    3848 main.go:141] libmachine: (ha-138000-m02) DBG | Searching for 9a:c2:e9:d7:1c:58 in /var/db/dhcpd_leases ...
	I0815 16:30:13.294866    3848 main.go:141] libmachine: (ha-138000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0815 16:30:13.294889    3848 main.go:141] libmachine: (ha-138000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:30:13.294903    3848 main.go:141] libmachine: (ha-138000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfdfcb}
	I0815 16:30:13.294915    3848 main.go:141] libmachine: (ha-138000-m02) DBG | Found match: 9a:c2:e9:d7:1c:58
	I0815 16:30:13.294931    3848 main.go:141] libmachine: (ha-138000-m02) DBG | IP: 192.169.0.6
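
The VM's IP is recovered by matching its generated MAC against macOS's DHCP lease database. A simplified Go sketch of that lookup, assuming ip_address precedes hw_address inside each lease block of /var/db/dhcpd_leases (as it does in the entries logged above):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipForMAC scans a dhcpd_leases file and returns the ip_address of the lease
// block whose hw_address contains the given MAC.
func ipForMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "ip_address=") {
			ip = strings.TrimPrefix(line, "ip_address=")
		}
		if strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac) {
			return ip, nil
		}
	}
	return "", fmt.Errorf("%s not found in %s", mac, path)
}

func main() {
	ip, err := ipForMAC("/var/db/dhcpd_leases", "9a:c2:e9:d7:1c:58")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(ip)
}
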
	I0815 16:30:13.294997    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetConfigRaw
	I0815 16:30:13.295728    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetIP
	I0815 16:30:13.295920    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:30:13.296384    3848 machine.go:93] provisionDockerMachine start ...
	I0815 16:30:13.296394    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:30:13.296516    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:13.296606    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:13.296695    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:13.296801    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:13.296905    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:13.297071    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:13.297242    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:30:13.297249    3848 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 16:30:13.300476    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0815 16:30:13.310276    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0815 16:30:13.311421    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:30:13.311448    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:30:13.311463    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:30:13.311475    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:30:13.698130    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0815 16:30:13.698145    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0815 16:30:13.812764    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:30:13.812785    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:30:13.812794    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:30:13.812888    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:30:13.813620    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0815 16:30:13.813637    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:13 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0815 16:30:19.405369    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:19 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0815 16:30:19.405428    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:19 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0815 16:30:19.405441    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:19 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0815 16:30:19.429063    3848 main.go:141] libmachine: (ha-138000-m02) DBG | 2024/08/15 16:30:19 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0815 16:30:24.364782    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 16:30:24.364794    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetMachineName
	I0815 16:30:24.364947    3848 buildroot.go:166] provisioning hostname "ha-138000-m02"
	I0815 16:30:24.364958    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetMachineName
	I0815 16:30:24.365057    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:24.365147    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:24.365238    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.365323    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.365453    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:24.365589    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:24.365741    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:30:24.365749    3848 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-138000-m02 && echo "ha-138000-m02" | sudo tee /etc/hostname
	I0815 16:30:24.435748    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-138000-m02
	
	I0815 16:30:24.435762    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:24.435893    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:24.435990    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.436082    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.436186    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:24.436313    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:24.436463    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:30:24.436475    3848 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-138000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-138000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-138000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 16:30:24.504475    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
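
The /etc/hosts edit above is deliberately idempotent: nothing is written if a line already ends in the hostname, an existing 127.0.1.1 entry is rewritten in place, and only otherwise is a new one appended. The same logic as a rough Go sketch (file locking and concurrent-writer concerns omitted):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// ensureHostname mirrors the shell above: skip if /etc/hosts already maps the
// hostname, else rewrite the 127.0.1.1 line, else append one.
func ensureHostname(path, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(host) + `$`).Match(data) {
		return nil // already mapped
	}
	loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loop.Match(data) {
		data = loop.ReplaceAll(data, []byte("127.0.1.1 "+host))
	} else {
		data = append(data, []byte(fmt.Sprintf("127.0.1.1 %s\n", host))...)
	}
	return os.WriteFile(path, data, 0644)
}

func main() {
	if err := ensureHostname("/etc/hosts", "ha-138000-m02"); err != nil {
		fmt.Println(err)
	}
}
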
	I0815 16:30:24.504492    3848 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19452-977/.minikube CaCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19452-977/.minikube}
	I0815 16:30:24.504503    3848 buildroot.go:174] setting up certificates
	I0815 16:30:24.504519    3848 provision.go:84] configureAuth start
	I0815 16:30:24.504526    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetMachineName
	I0815 16:30:24.504663    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetIP
	I0815 16:30:24.504758    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:24.504846    3848 provision.go:143] copyHostCerts
	I0815 16:30:24.504877    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:30:24.504929    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem, removing ...
	I0815 16:30:24.504935    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:30:24.505124    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem (1082 bytes)
	I0815 16:30:24.505339    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:30:24.505371    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem, removing ...
	I0815 16:30:24.505375    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:30:24.505446    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem (1123 bytes)
	I0815 16:30:24.505596    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:30:24.505624    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem, removing ...
	I0815 16:30:24.505628    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:30:24.505696    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem (1679 bytes)
	I0815 16:30:24.505845    3848 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem org=jenkins.ha-138000-m02 san=[127.0.0.1 192.169.0.6 ha-138000-m02 localhost minikube]
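
The server certificate above is signed by the cluster CA with SANs covering the node's IPs and names. A compact sketch of that signing step with crypto/x509; a throwaway in-memory CA stands in here for minikube's ca.pem/ca-key.pem, and error handling is abbreviated:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA key and template (key generation errors elided for brevity).
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}

	// Server cert whose SANs match the san=[...] list in the log line above.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-138000-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
		DNSNames:     []string{"ha-138000-m02", "localhost", "minikube"},
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, srv, ca, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}))
}
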
	I0815 16:30:24.669808    3848 provision.go:177] copyRemoteCerts
	I0815 16:30:24.669859    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 16:30:24.669875    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:24.670016    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:24.670138    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.670247    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:24.670341    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	I0815 16:30:24.707125    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 16:30:24.707202    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 16:30:24.726013    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 16:30:24.726070    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 16:30:24.745370    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 16:30:24.745429    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 16:30:24.765407    3848 provision.go:87] duration metric: took 260.879651ms to configureAuth
	I0815 16:30:24.765419    3848 buildroot.go:189] setting minikube options for container-runtime
	I0815 16:30:24.765586    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:30:24.765614    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:30:24.765750    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:24.765841    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:24.765917    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.765992    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.766073    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:24.766180    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:24.766348    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:30:24.766356    3848 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0815 16:30:24.825444    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0815 16:30:24.825455    3848 buildroot.go:70] root file system type: tmpfs
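
The tmpfs answer tells provisioning that the guest root is the RAM-backed Buildroot image. A guest-side (Linux-only) equivalent of the df probe using statfs, with the tmpfs magic number taken from linux/magic.h:

package main

import (
	"fmt"
	"syscall"
)

const tmpfsMagic = 0x01021994 // TMPFS_MAGIC from linux/magic.h

func main() {
	// Equivalent of `df --output=fstype / | tail -n 1` returning "tmpfs".
	var st syscall.Statfs_t
	if err := syscall.Statfs("/", &st); err != nil {
		panic(err)
	}
	// st.Type carries the filesystem magic number of the mount.
	fmt.Println("root is tmpfs:", st.Type == tmpfsMagic)
}
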
	I0815 16:30:24.825535    3848 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0815 16:30:24.825546    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:24.825668    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:24.825761    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.825848    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.825931    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:24.826067    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:24.826205    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:30:24.826249    3848 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0815 16:30:24.894944    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0815 16:30:24.894961    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:24.895099    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:24.895204    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.895287    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:24.895382    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:24.895505    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:24.895640    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:30:24.895652    3848 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0815 16:30:26.552071    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0815 16:30:26.552086    3848 machine.go:96] duration metric: took 13.255738864s to provisionDockerMachine
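
The unit install a few lines back relies on a diff-or-replace idiom: the new file is only moved into place, and docker only re-enabled and restarted, when the rendered unit differs from what is on disk. The "No such file or directory" from diff is the expected first-run case, not a failure. The same write-only-if-changed shape in Go (paths illustrative; the daemon-reload/restart step is left to the caller):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// writeIfChanged installs newContent at path and reports whether anything
// changed, touching nothing when the contents are already identical.
func writeIfChanged(path string, newContent []byte) (changed bool, err error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, newContent) {
		return false, nil
	}
	if err != nil && !os.IsNotExist(err) {
		return false, err
	}
	return true, os.WriteFile(path, newContent, 0644)
}

func main() {
	changed, err := writeIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
	fmt.Println(changed, err)
}
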
	I0815 16:30:26.552093    3848 start.go:293] postStartSetup for "ha-138000-m02" (driver="hyperkit")
	I0815 16:30:26.552100    3848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 16:30:26.552110    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:30:26.552311    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 16:30:26.552326    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:26.552426    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:26.552517    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:26.552610    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:26.552712    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	I0815 16:30:26.593353    3848 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 16:30:26.598425    3848 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 16:30:26.598438    3848 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/addons for local assets ...
	I0815 16:30:26.598548    3848 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/files for local assets ...
	I0815 16:30:26.598699    3848 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> 14982.pem in /etc/ssl/certs
	I0815 16:30:26.598705    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /etc/ssl/certs/14982.pem
	I0815 16:30:26.598861    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 16:30:26.610066    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:30:26.645456    3848 start.go:296] duration metric: took 93.354607ms for postStartSetup
	I0815 16:30:26.645497    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:30:26.645674    3848 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0815 16:30:26.645688    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:26.645776    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:26.645850    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:26.645933    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:26.646015    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	I0815 16:30:26.683361    3848 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0815 16:30:26.683423    3848 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0815 16:30:26.737495    3848 fix.go:56] duration metric: took 13.554488062s for fixHost
	I0815 16:30:26.737525    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:26.737661    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:26.737749    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:26.737848    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:26.737943    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:26.738080    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:30:26.738216    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0815 16:30:26.738224    3848 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 16:30:26.796943    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723764627.049155775
	
	I0815 16:30:26.796953    3848 fix.go:216] guest clock: 1723764627.049155775
	I0815 16:30:26.796959    3848 fix.go:229] Guest: 2024-08-15 16:30:27.049155775 -0700 PDT Remote: 2024-08-15 16:30:26.737509 -0700 PDT m=+32.739307986 (delta=311.646775ms)
	I0815 16:30:26.796973    3848 fix.go:200] guest clock delta is within tolerance: 311.646775ms
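
The skew check above parses the guest's `date +%s.%N` output and accepts the restored VM only if the guest/host delta stays within tolerance (about 312ms here). A sketch of the comparison; float64 parsing is precise to well under a millisecond at this epoch, and the tolerance value below is illustrative rather than minikube's actual threshold:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	guestOut := "1723764627.049155775" // what `date +%s.%N` returned above
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	tolerance := 1 * time.Second // illustrative; pick per use case
	fmt.Printf("delta=%v within=%v\n", delta,
		math.Abs(delta.Seconds()) <= tolerance.Seconds())
}
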
	I0815 16:30:26.796977    3848 start.go:83] releasing machines lock for "ha-138000-m02", held for 13.613993837s
	I0815 16:30:26.796994    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:30:26.797121    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetIP
	I0815 16:30:26.821561    3848 out.go:177] * Found network options:
	I0815 16:30:26.841357    3848 out.go:177]   - NO_PROXY=192.169.0.5
	W0815 16:30:26.862556    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 16:30:26.862605    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:30:26.863433    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:30:26.863671    3848 main.go:141] libmachine: (ha-138000-m02) Calling .DriverName
	I0815 16:30:26.863815    3848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 16:30:26.863856    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	W0815 16:30:26.863902    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 16:30:26.863997    3848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 16:30:26.864019    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHHostname
	I0815 16:30:26.864116    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:26.864226    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHPort
	I0815 16:30:26.864284    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:26.864479    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:26.864535    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHKeyPath
	I0815 16:30:26.864691    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	I0815 16:30:26.864752    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetSSHUsername
	I0815 16:30:26.864886    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m02/id_rsa Username:docker}
	W0815 16:30:26.897510    3848 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 16:30:26.897576    3848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 16:30:26.944949    3848 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 16:30:26.944964    3848 start.go:495] detecting cgroup driver to use...
	I0815 16:30:26.945031    3848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:30:26.959965    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0815 16:30:26.969052    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0815 16:30:26.977789    3848 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0815 16:30:26.977840    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0815 16:30:26.986870    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:30:26.995871    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0815 16:30:27.004811    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:30:27.013722    3848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 16:30:27.022692    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0815 16:30:27.031569    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0815 16:30:27.040462    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0815 16:30:27.049386    3848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 16:30:27.057419    3848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 16:30:27.065508    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:27.164154    3848 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0815 16:30:27.181165    3848 start.go:495] detecting cgroup driver to use...
	I0815 16:30:27.181250    3848 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0815 16:30:27.192595    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:30:27.203037    3848 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 16:30:27.216573    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:30:27.228211    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:30:27.239268    3848 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0815 16:30:27.258656    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:30:27.269954    3848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:30:27.284667    3848 ssh_runner.go:195] Run: which cri-dockerd
	I0815 16:30:27.287552    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0815 16:30:27.295653    3848 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0815 16:30:27.309091    3848 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0815 16:30:27.403676    3848 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0815 16:30:27.500434    3848 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0815 16:30:27.500464    3848 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0815 16:30:27.514754    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:27.610670    3848 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0815 16:30:29.951174    3848 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.340492876s)
	I0815 16:30:29.951241    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0815 16:30:29.961656    3848 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0815 16:30:29.974207    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:30:29.984718    3848 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0815 16:30:30.078933    3848 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0815 16:30:30.191991    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:30.301187    3848 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0815 16:30:30.314601    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:30:30.325440    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:30.420867    3848 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0815 16:30:30.486340    3848 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0815 16:30:30.486435    3848 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0815 16:30:30.491068    3848 start.go:563] Will wait 60s for crictl version
	I0815 16:30:30.491127    3848 ssh_runner.go:195] Run: which crictl
	I0815 16:30:30.494150    3848 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 16:30:30.523583    3848 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0815 16:30:30.523658    3848 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:30:30.541608    3848 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:30:30.598613    3848 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0815 16:30:30.658061    3848 out.go:177]   - env NO_PROXY=192.169.0.5
	I0815 16:30:30.695353    3848 main.go:141] libmachine: (ha-138000-m02) Calling .GetIP
	I0815 16:30:30.695714    3848 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0815 16:30:30.700361    3848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 16:30:30.709893    3848 mustload.go:65] Loading cluster: ha-138000
	I0815 16:30:30.710062    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:30:30.710316    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:30:30.710336    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:30:30.719005    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52273
	I0815 16:30:30.719360    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:30:30.719741    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:30:30.719750    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:30:30.719981    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:30:30.720103    3848 main.go:141] libmachine: (ha-138000) Calling .GetState
	I0815 16:30:30.720187    3848 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:30:30.720267    3848 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid from json: 3862
	I0815 16:30:30.721211    3848 host.go:66] Checking if "ha-138000" exists ...
	I0815 16:30:30.721471    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:30:30.721491    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:30:30.729999    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52275
	I0815 16:30:30.730336    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:30:30.730678    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:30:30.730693    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:30:30.730926    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:30:30.731056    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:30:30.731175    3848 certs.go:68] Setting up /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000 for IP: 192.169.0.6
	I0815 16:30:30.731181    3848 certs.go:194] generating shared ca certs ...
	I0815 16:30:30.731197    3848 certs.go:226] acquiring lock for ca certs: {Name:mka1c88019f6064d2983ac988db71a67aeb65696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:30:30.731336    3848 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key
	I0815 16:30:30.731387    3848 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key
	I0815 16:30:30.731396    3848 certs.go:256] generating profile certs ...
	I0815 16:30:30.731509    3848 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.key
	I0815 16:30:30.731595    3848 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key.5f0053a1
	I0815 16:30:30.731651    3848 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key
	I0815 16:30:30.731658    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 16:30:30.731679    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 16:30:30.731700    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 16:30:30.731722    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 16:30:30.731740    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 16:30:30.731768    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 16:30:30.731791    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 16:30:30.731809    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 16:30:30.731883    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem (1338 bytes)
	W0815 16:30:30.731920    3848 certs.go:480] ignoring /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498_empty.pem, impossibly tiny 0 bytes
	I0815 16:30:30.731928    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 16:30:30.731973    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem (1082 bytes)
	I0815 16:30:30.732017    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem (1123 bytes)
	I0815 16:30:30.732045    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem (1679 bytes)
	I0815 16:30:30.732121    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:30:30.732157    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /usr/share/ca-certificates/14982.pem
	I0815 16:30:30.732177    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:30:30.732194    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem -> /usr/share/ca-certificates/1498.pem
	I0815 16:30:30.732219    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:30:30.732316    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:30:30.732406    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:30:30.732529    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:30:30.732609    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:30:30.763783    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0815 16:30:30.767449    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0815 16:30:30.776129    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0815 16:30:30.779163    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0815 16:30:30.787730    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0815 16:30:30.791082    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0815 16:30:30.799754    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0815 16:30:30.802809    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0815 16:30:30.811618    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0815 16:30:30.814650    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0815 16:30:30.822963    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0815 16:30:30.826004    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0815 16:30:30.834906    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 16:30:30.854912    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 16:30:30.874577    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 16:30:30.894388    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 16:30:30.914413    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 16:30:30.933887    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 16:30:30.953772    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 16:30:30.973419    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 16:30:30.992862    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /usr/share/ca-certificates/14982.pem (1708 bytes)
	I0815 16:30:31.012391    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 16:30:31.031916    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem --> /usr/share/ca-certificates/1498.pem (1338 bytes)
	I0815 16:30:31.051694    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0815 16:30:31.065167    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0815 16:30:31.078573    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0815 16:30:31.091997    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0815 16:30:31.105622    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0815 16:30:31.119143    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0815 16:30:31.132670    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0815 16:30:31.146406    3848 ssh_runner.go:195] Run: openssl version
	I0815 16:30:31.150444    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 16:30:31.158651    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:30:31.162017    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:05 /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:30:31.162055    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:30:31.166191    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 16:30:31.174561    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1498.pem && ln -fs /usr/share/ca-certificates/1498.pem /etc/ssl/certs/1498.pem"
	I0815 16:30:31.182745    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1498.pem
	I0815 16:30:31.186223    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:14 /usr/share/ca-certificates/1498.pem
	I0815 16:30:31.186262    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1498.pem
	I0815 16:30:31.190437    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1498.pem /etc/ssl/certs/51391683.0"
	I0815 16:30:31.198642    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14982.pem && ln -fs /usr/share/ca-certificates/14982.pem /etc/ssl/certs/14982.pem"
	I0815 16:30:31.207129    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14982.pem
	I0815 16:30:31.210527    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:14 /usr/share/ca-certificates/14982.pem
	I0815 16:30:31.210565    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14982.pem
	I0815 16:30:31.214780    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14982.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 16:30:31.223055    3848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 16:30:31.226404    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 16:30:31.230624    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 16:30:31.234964    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 16:30:31.239281    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 16:30:31.243508    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 16:30:31.247740    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
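
Each openssl call above is `-checkend 86400`, i.e. "does this certificate expire within the next 24 hours", run across the apiserver, etcd, and front-proxy client certs. A native Go equivalent (path illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first PEM certificate in path expires
// within d, matching `openssl x509 -checkend` semantics.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
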
	I0815 16:30:31.251885    3848 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.0 docker true true} ...
	I0815 16:30:31.251948    3848 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-138000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 16:30:31.251968    3848 kube-vip.go:115] generating kube-vip config ...
	I0815 16:30:31.251997    3848 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 16:30:31.264157    3848 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 16:30:31.264200    3848 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
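The manifest above is a static pod: kubelet on each control-plane node runs kube-vip, the instances elect a leader through the plndr-cp-lock Lease (5s lease, 3s renew deadline, 1s retry), and the leader answers ARP for the VIP 192.169.0.254 on eth0; lb_enable additionally load-balances API traffic on port 8443 across control planes, which is why the ip_vs modules were modprobe'd just before. Assuming a working kubeconfig for the cluster, the current leader can be read back from that lease, e.g.:

    # which kube-vip instance currently holds the control-plane lock
    kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'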
	I0815 16:30:31.264247    3848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 16:30:31.272799    3848 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 16:30:31.272844    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0815 16:30:31.280999    3848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0815 16:30:31.294195    3848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 16:30:31.307421    3848 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
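The three transfers above lay down the kubelet unit, its 10-kubeadm.conf drop-in, and the kube-vip static-pod manifest. The empty "ExecStart=" line in the unit text earlier is the standard systemd idiom for clearing an inherited ExecStart before redefining it in a drop-in. One way to confirm the merged unit after the copy, as a sketch:

    systemctl cat kubelet        # prints the unit plus the 10-kubeadm.conf drop-in
    sudo systemctl daemon-reload # pick up the new files, as the log does next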
	I0815 16:30:31.321201    3848 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0815 16:30:31.324137    3848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
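The /etc/hosts rewrite above uses a filter-then-append pattern: grep -v drops any stale control-plane.minikube.internal line, the new mapping is appended, and the result is copied back with sudo because a plain "sudo echo ... >> /etc/hosts" would fail (the redirection is performed by the unprivileged shell before sudo ever runs). The same idempotent pattern in isolation, with the VIP from this run used only as an example:

    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      printf '192.169.0.254\tcontrol-plane.minikube.internal\n'
    } > /tmp/hosts.$$ && sudo cp /tmp/hosts.$$ /etc/hosts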
	I0815 16:30:31.334188    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:31.429450    3848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:30:31.443961    3848 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 16:30:31.444161    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:30:31.465375    3848 out.go:177] * Verifying Kubernetes components...
	I0815 16:30:31.507025    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:30:31.625968    3848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:30:31.645410    3848 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:30:31.645610    3848 kapi.go:59] client config for ha-138000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.key", CAFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x3a6cf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0815 16:30:31.645648    3848 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0815 16:30:31.645835    3848 node_ready.go:35] waiting up to 6m0s for node "ha-138000-m02" to be "Ready" ...
	I0815 16:30:31.645920    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:31.645925    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:31.645933    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:31.645936    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.053028    3848 round_trippers.go:574] Response Status: 200 OK in 8407 milliseconds
	I0815 16:30:40.053934    3848 node_ready.go:49] node "ha-138000-m02" has status "Ready":"True"
	I0815 16:30:40.053949    3848 node_ready.go:38] duration metric: took 8.408123647s for node "ha-138000-m02" to be "Ready" ...
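node_ready polls GET /api/v1/nodes/<name> until the node's Ready condition reports True; the first request here took 8.4s because the apiserver was still settling behind the VIP. Outside the test harness, kubectl wait expresses the same check, assuming the kubeconfig from this run:

    kubectl wait --for=condition=Ready node/ha-138000-m02 --timeout=6m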
	I0815 16:30:40.053959    3848 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 16:30:40.053997    3848 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0815 16:30:40.054008    3848 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0815 16:30:40.054051    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:30:40.054057    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.054064    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.054066    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.076049    3848 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0815 16:30:40.083485    3848 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dmgt5" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.083552    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-dmgt5
	I0815 16:30:40.083559    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.083565    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.083569    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.090478    3848 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 16:30:40.091010    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:40.091019    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.091025    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.091028    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.094713    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:40.095017    3848 pod_ready.go:93] pod "coredns-6f6b679f8f-dmgt5" in "kube-system" namespace has status "Ready":"True"
	I0815 16:30:40.095031    3848 pod_ready.go:82] duration metric: took 11.52447ms for pod "coredns-6f6b679f8f-dmgt5" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.095040    3848 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-zc8jj" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.095087    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-zc8jj
	I0815 16:30:40.095094    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.095102    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.095107    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.101746    3848 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 16:30:40.102483    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:40.102492    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.102500    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.102503    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.105983    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:40.106569    3848 pod_ready.go:93] pod "coredns-6f6b679f8f-zc8jj" in "kube-system" namespace has status "Ready":"True"
	I0815 16:30:40.106587    3848 pod_ready.go:82] duration metric: took 11.533246ms for pod "coredns-6f6b679f8f-zc8jj" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.106595    3848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.106638    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000
	I0815 16:30:40.106644    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.106651    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.106654    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.110887    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:30:40.111881    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:40.111893    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.111902    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.111907    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.114794    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:40.115181    3848 pod_ready.go:93] pod "etcd-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:30:40.115194    3848 pod_ready.go:82] duration metric: took 8.594007ms for pod "etcd-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.115201    3848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.115242    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m02
	I0815 16:30:40.115247    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.115252    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.115256    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.121257    3848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 16:30:40.121684    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:40.121694    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.121704    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.121710    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.125990    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:30:40.126507    3848 pod_ready.go:93] pod "etcd-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:30:40.126520    3848 pod_ready.go:82] duration metric: took 11.312949ms for pod "etcd-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.126528    3848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.126573    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:30:40.126579    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.126585    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.126589    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.129916    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:40.254208    3848 request.go:632] Waited for 123.846339ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:30:40.254247    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:30:40.254252    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.254262    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.254299    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.258157    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:40.258510    3848 pod_ready.go:93] pod "etcd-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:30:40.258520    3848 pod_ready.go:82] duration metric: took 131.98589ms for pod "etcd-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.258532    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.454350    3848 request.go:632] Waited for 195.778452ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000
	I0815 16:30:40.454424    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000
	I0815 16:30:40.454430    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.454436    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.454441    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.457270    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:40.654210    3848 request.go:632] Waited for 196.49648ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:40.654247    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:40.654254    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.654300    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.654306    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.662420    3848 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0815 16:30:40.662780    3848 pod_ready.go:98] node "ha-138000" hosting pod "kube-apiserver-ha-138000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-138000" has status "Ready":"False"
	I0815 16:30:40.662798    3848 pod_ready.go:82] duration metric: took 404.260054ms for pod "kube-apiserver-ha-138000" in "kube-system" namespace to be "Ready" ...
	E0815 16:30:40.662809    3848 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-138000" hosting pod "kube-apiserver-ha-138000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-138000" has status "Ready":"False"
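The WaitExtra error above is a skip, not a failure: pod_ready refuses to count a pod as Ready while its hosting node reports Ready=False, then moves on to the next pod. The equivalent two-step check with kubectl jsonpath, as a sketch:

    # find the node the pod is scheduled on, then read that node's Ready condition
    node=$(kubectl -n kube-system get pod kube-apiserver-ha-138000 -o jsonpath='{.spec.nodeName}')
    kubectl get node "$node" -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'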
	I0815 16:30:40.662819    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:40.854147    3848 request.go:632] Waited for 191.277341ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:40.854226    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:40.854232    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:40.854238    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:40.854243    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:40.859631    3848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 16:30:41.054463    3848 request.go:632] Waited for 194.266573ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:41.054497    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:41.054501    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:41.054509    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:41.054513    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:41.058210    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:41.254872    3848 request.go:632] Waited for 91.867207ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:41.254917    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:41.254966    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:41.254978    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:41.254982    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:41.258343    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:41.455877    3848 request.go:632] Waited for 196.977249ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:41.455912    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:41.455919    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:41.455925    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:41.455931    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:41.457855    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:41.664056    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:41.664082    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:41.664093    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:41.664100    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:41.667876    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:41.854208    3848 request.go:632] Waited for 185.493412ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:41.854247    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:41.854253    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:41.854260    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:41.854264    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:41.856823    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:42.163578    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:42.163664    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:42.163680    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:42.163716    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:42.167135    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:42.254205    3848 request.go:632] Waited for 86.267935ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:42.254261    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:42.254269    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:42.254286    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:42.254324    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:42.257709    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:42.664326    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:42.664344    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:42.664353    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:42.664357    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:42.666960    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:42.667548    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:42.667555    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:42.667561    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:42.667564    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:42.669222    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:42.669539    3848 pod_ready.go:103] pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace has status "Ready":"False"
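From here the harness re-polls the pod and its node roughly every half second until the Ready condition flips to True (it does at 16:30:52, 11.5s in). A rough shell equivalent of that loop, with the interval chosen purely for illustration:

    # poll until kube-apiserver-ha-138000-m02 reports Ready=True
    until kubectl -n kube-system get pod kube-apiserver-ha-138000-m02 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' | grep -q True; do
      sleep 0.5
    done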
	I0815 16:30:43.163236    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:43.163273    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:43.163281    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:43.163286    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:43.165588    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:43.166081    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:43.166088    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:43.166094    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:43.166097    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:43.167727    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:43.663181    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:43.663266    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:43.663274    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:43.663277    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:43.665851    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:43.666288    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:43.666295    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:43.666301    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:43.666305    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:43.669495    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:44.163768    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:44.163782    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:44.163788    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:44.163800    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:44.166284    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:44.166820    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:44.166828    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:44.166834    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:44.166853    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:44.169173    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:44.663006    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:44.663018    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:44.663023    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:44.663025    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:44.665460    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:44.666145    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:44.666152    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:44.666158    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:44.666162    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:44.668246    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:45.164214    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:45.164237    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:45.164314    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:45.164325    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:45.167819    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:45.168514    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:45.168521    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:45.168528    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:45.168531    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:45.170434    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:45.170836    3848 pod_ready.go:103] pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace has status "Ready":"False"
	I0815 16:30:45.665030    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:45.665056    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:45.665068    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:45.665073    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:45.668540    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:45.669128    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:45.669139    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:45.669148    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:45.669152    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:45.671055    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:46.163033    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:46.163095    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:46.163108    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:46.163116    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:46.166371    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:46.166786    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:46.166793    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:46.166799    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:46.166803    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:46.168600    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:46.663767    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:46.663791    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:46.663803    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:46.663814    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:46.667030    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:46.667614    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:46.667625    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:46.667633    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:46.667637    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:46.669233    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:47.163455    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:47.163469    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:47.163475    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:47.163480    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:47.167195    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:47.167557    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:47.167565    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:47.167571    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:47.167576    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:47.170814    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:47.171266    3848 pod_ready.go:103] pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace has status "Ready":"False"
	I0815 16:30:47.663794    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:47.663820    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:47.663831    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:47.663839    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:47.667639    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:47.668283    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:47.668291    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:47.668297    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:47.668301    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:47.669950    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:48.164538    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:48.164559    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:48.164581    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:48.164603    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:48.168530    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:48.169233    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:48.169241    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:48.169248    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:48.169251    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:48.171274    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:48.663780    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:48.663804    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:48.663815    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:48.663821    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:48.667278    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:48.667837    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:48.667845    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:48.667851    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:48.667856    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:48.669518    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:49.165064    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:49.165087    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:49.165098    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:49.165104    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:49.168508    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:49.169206    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:49.169217    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:49.169225    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:49.169230    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:49.171198    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:49.171795    3848 pod_ready.go:103] pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace has status "Ready":"False"
	I0815 16:30:49.663424    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:49.663448    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:49.663459    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:49.663467    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:49.667225    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:49.667697    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:49.667705    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:49.667711    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:49.667714    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:49.669376    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:50.164125    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:50.164149    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:50.164161    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:50.164166    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:50.167285    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:50.167810    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:50.167817    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:50.167823    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:50.167827    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:50.171799    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:50.663500    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:50.663525    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:50.663537    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:50.663543    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:50.667177    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:50.667713    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:50.667720    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:50.667726    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:50.667730    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:50.669352    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:51.164194    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:51.164219    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:51.164237    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:51.164244    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:51.167593    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:51.168246    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:51.168257    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:51.168264    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:51.168270    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:51.170524    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:51.664614    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:51.664638    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:51.664657    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:51.664665    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:51.668046    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:51.668566    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:51.668577    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:51.668585    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:51.668607    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:51.671534    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:51.671914    3848 pod_ready.go:103] pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace has status "Ready":"False"
	I0815 16:30:52.164065    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:30:52.164089    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:52.164101    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:52.164110    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:52.167433    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:52.167935    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:30:52.167943    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:52.167948    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:52.167952    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:52.169540    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:52.169859    3848 pod_ready.go:93] pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:30:52.169869    3848 pod_ready.go:82] duration metric: took 11.507082407s for pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:52.169876    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:52.169910    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m03
	I0815 16:30:52.169915    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:52.169920    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:52.169923    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:52.171715    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:52.172141    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:30:52.172148    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:52.172154    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:52.172158    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:52.173532    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:52.173854    3848 pod_ready.go:93] pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:30:52.173863    3848 pod_ready.go:82] duration metric: took 3.981675ms for pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:52.173872    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:30:52.173900    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:52.173905    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:52.173911    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:52.173915    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:52.175518    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:52.175919    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:52.175926    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:52.175932    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:52.175936    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:52.177444    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:52.675197    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:52.675270    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:52.675284    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:52.675316    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:52.678186    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:52.678703    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:52.678711    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:52.678716    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:52.678719    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:52.680216    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:53.174971    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:53.174985    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:53.174994    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:53.175001    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:53.177452    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:53.177896    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:53.177903    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:53.177909    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:53.177912    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:53.179480    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:53.674788    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:53.674799    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:53.674806    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:53.674809    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:53.676873    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:53.677297    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:53.677305    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:53.677311    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:53.677315    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:53.678908    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:54.175897    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:54.175920    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:54.175937    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:54.175942    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:54.180021    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:30:54.180479    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:54.180486    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:54.180492    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:54.180495    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:54.182351    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:54.182698    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:30:54.674099    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:54.674113    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:54.674122    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:54.674126    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:54.676508    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:54.676959    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:54.676967    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:54.676973    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:54.676977    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:54.678531    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:55.174102    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:55.174117    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:55.174124    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:55.174129    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:55.176616    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:55.176978    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:55.176985    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:55.176991    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:55.176995    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:55.178804    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:55.675041    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:55.675073    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:55.675080    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:55.675083    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:55.677155    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:55.677606    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:55.677614    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:55.677620    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:55.677623    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:55.679257    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:56.174332    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:56.174347    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:56.174355    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:56.174360    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:56.176768    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:56.177182    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:56.177189    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:56.177194    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:56.177199    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:56.178739    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:56.674623    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:56.674644    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:56.674656    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:56.674663    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:56.678017    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:56.678729    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:56.678740    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:56.678748    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:56.678753    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:56.680396    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:56.680664    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:30:57.174239    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:57.174259    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:57.174270    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:57.174276    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:57.176913    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:30:57.177317    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:57.177325    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:57.177330    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:57.177333    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:57.179089    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:57.674639    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:57.674650    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:57.674656    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:57.674660    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:57.676502    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:57.676984    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:57.676992    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:57.676997    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:57.677001    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:57.678477    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:58.174097    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:58.174117    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:58.174128    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:58.174136    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:58.177182    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:58.177563    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:58.177571    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:58.177575    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:58.177579    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:58.179304    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:58.675031    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:58.675045    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:58.675051    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:58.675055    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:58.680738    3848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 16:30:58.682155    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:58.682163    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:58.682168    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:58.682171    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:58.686617    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:30:58.686985    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:30:59.174980    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:59.175006    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:59.175018    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:59.175023    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:59.178731    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:30:59.179314    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:59.179322    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:59.179328    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:59.179332    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:59.181206    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:59.674657    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:30:59.674670    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:59.674676    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:59.674679    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:59.676675    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:30:59.677055    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:30:59.677062    3848 round_trippers.go:469] Request Headers:
	I0815 16:30:59.677069    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:30:59.677074    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:30:59.679271    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:00.174152    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:00.174175    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:00.174187    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:00.174194    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:00.177768    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:00.178234    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:00.178241    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:00.178247    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:00.178251    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:00.179906    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:00.675229    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:00.675240    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:00.675246    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:00.675250    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:00.677503    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:00.677966    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:00.677974    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:00.677979    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:00.677983    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:00.681462    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:01.174237    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:01.174258    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:01.174271    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:01.174278    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:01.177221    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:01.177958    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:01.177967    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:01.177973    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:01.177987    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:01.179870    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:01.180167    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:01.674059    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:01.674071    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:01.674078    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:01.674082    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:01.678596    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:31:01.679166    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:01.679175    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:01.679183    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:01.679203    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:01.681866    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:02.174721    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:02.174744    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:02.174757    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:02.174765    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:02.177936    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:02.178578    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:02.178585    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:02.178590    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:02.178593    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:02.180199    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:02.674480    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:02.674492    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:02.674498    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:02.674501    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:02.676574    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:02.677121    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:02.677129    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:02.677135    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:02.677138    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:02.678870    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:03.174993    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:03.175017    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:03.175028    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:03.175034    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:03.178103    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:03.178765    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:03.178773    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:03.178780    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:03.178783    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:03.180384    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:03.180717    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:03.675885    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:03.675928    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:03.675935    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:03.675938    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:03.681610    3848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 16:31:03.682165    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:03.682172    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:03.682178    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:03.682187    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:03.685681    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:04.173973    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:04.173985    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:04.173993    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:04.173996    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:04.176170    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:04.176622    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:04.176629    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:04.176635    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:04.176638    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:04.178918    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:04.674029    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:04.674041    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:04.674047    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:04.674051    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:04.676085    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:04.676616    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:04.676624    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:04.676629    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:04.676633    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:04.678653    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:05.174670    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:05.174682    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:05.174692    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:05.174696    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:05.176894    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:05.177444    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:05.177452    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:05.177458    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:05.177462    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:05.179988    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:05.673967    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:05.673984    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:05.673991    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:05.674005    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:05.676133    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:05.676616    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:05.676623    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:05.676629    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:05.676632    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:05.678220    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:05.678588    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:06.174028    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:06.174040    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:06.174046    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:06.174049    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:06.176193    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:06.176556    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:06.176564    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:06.176570    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:06.176574    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:06.178240    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:06.674003    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:06.674018    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:06.674028    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:06.674032    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:06.676638    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:06.677110    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:06.677118    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:06.677124    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:06.677127    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:06.680025    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:07.175462    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:07.175477    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:07.175485    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:07.175489    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:07.178337    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:07.178886    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:07.178895    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:07.178900    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:07.178904    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:07.181117    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:07.674103    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:07.674115    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:07.674121    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:07.674125    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:07.676375    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:07.676766    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:07.676774    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:07.676780    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:07.676783    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:07.678622    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:07.678897    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:08.174128    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:08.174151    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:08.174166    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:08.174203    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:08.177482    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:08.177896    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:08.177904    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:08.177909    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:08.177914    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:08.179348    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:08.674105    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:08.674132    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:08.674180    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:08.674191    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:08.677562    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:08.677981    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:08.677989    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:08.677994    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:08.677997    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:08.679564    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:09.174687    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:09.174712    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:09.174723    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:09.174728    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:09.177711    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:09.178141    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:09.178149    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:09.178155    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:09.178160    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:09.179715    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:09.675793    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:09.675810    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:09.675860    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:09.675867    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:09.681370    3848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 16:31:09.681707    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:09.681714    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:09.681720    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:09.681724    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:09.683407    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:09.683668    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:10.174082    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:10.174096    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:10.174104    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:10.174111    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:10.176432    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:10.176901    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:10.176909    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:10.176916    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:10.176919    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:10.178547    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:10.674143    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:10.674158    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:10.674166    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:10.674171    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:10.676827    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:10.677366    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:10.677374    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:10.677379    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:10.677398    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:10.679369    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:11.174015    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:11.174031    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:11.174039    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:11.174043    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:11.176194    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:11.176646    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:11.176655    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:11.176661    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:11.176664    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:11.178182    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:11.674088    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:11.674100    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:11.674107    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:11.674111    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:11.676722    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:11.677179    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:11.677186    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:11.677192    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:11.677197    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:11.679318    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:12.173967    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:12.173978    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:12.173983    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:12.173986    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:12.176395    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:12.176784    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:12.176792    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:12.176797    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:12.176799    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:12.178613    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:12.178965    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:12.674752    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:12.674764    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:12.674771    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:12.674774    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:12.676796    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:12.677237    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:12.677244    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:12.677249    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:12.677254    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:12.678824    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:13.174235    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:13.174257    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:13.174269    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:13.174275    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:13.177507    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:13.177937    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:13.177945    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:13.177950    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:13.177958    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:13.179998    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:13.674842    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:13.674865    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:13.674920    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:13.674927    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:13.677347    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:13.677743    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:13.677750    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:13.677756    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:13.677760    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:13.679598    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:14.174511    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:14.174531    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:14.174543    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:14.174548    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:14.177242    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:14.177787    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:14.177794    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:14.177799    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:14.177804    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:14.179505    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:14.179846    3848 pod_ready.go:103] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:14.674978    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:14.674991    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:14.675000    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:14.675005    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:14.677126    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:14.677577    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:14.677584    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:14.677589    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:14.677592    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:14.679150    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.174111    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:15.174190    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.174206    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.174214    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.178180    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:15.178702    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:15.178709    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.178716    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.178720    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.180563    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.674161    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:15.674175    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.674181    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.674184    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.676320    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:15.676809    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:15.676817    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.676822    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.676826    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.678731    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.679179    3848 pod_ready.go:93] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:15.679188    3848 pod_ready.go:82] duration metric: took 23.505390371s for pod "kube-controller-manager-ha-138000" in "kube-system" namespace to be "Ready" ...
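
The twenty-second stretch of paired GETs above (pod, then its node, at a ~500ms cadence; note the .17x/.67x timestamps) is minikube's pod_ready wait rendered as log lines: fetch the Pod, inspect its Ready condition, and re-check until it reports True or the 6m0s budget runs out. The following stdlib-only Go sketch approximates that loop. It is not minikube's actual implementation (which goes through k8s.io/client-go), and it omits the bearer token or client certificate a real apiserver call would need.

// Hypothetical sketch of the readiness polling visible above; minikube's real
// code uses k8s.io/client-go. TLS verification is skipped because the test
// apiserver presents a self-signed certificate, and auth is omitted entirely.
package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// podStatus models only the fields this sketch needs from a Pod object.
type podStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func podReady(client *http.Client, url string) (bool, error) {
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var ps podStatus
	if err := json.NewDecoder(resp.Body).Decode(&ps); err != nil {
		return false, err
	}
	for _, c := range ps.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	url := "https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000"
	deadline := time.Now().Add(6 * time.Minute) // matches "waiting up to 6m0s"
	for time.Now().Before(deadline) {
		if ready, err := podReady(client, url); err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the polling cadence above
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
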
	I0815 16:31:15.679194    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.679234    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000-m02
	I0815 16:31:15.679239    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.679244    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.679249    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.680973    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.681373    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:15.681379    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.681385    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.681389    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.683105    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.683478    3848 pod_ready.go:93] pod "kube-controller-manager-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:15.683487    3848 pod_ready.go:82] duration metric: took 4.286435ms for pod "kube-controller-manager-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.683493    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.683528    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000-m03
	I0815 16:31:15.683532    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.683538    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.683543    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.685040    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.685461    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:15.685469    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.685474    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.685478    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.687218    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.687628    3848 pod_ready.go:93] pod "kube-controller-manager-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:15.687636    3848 pod_ready.go:82] duration metric: took 4.137303ms for pod "kube-controller-manager-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.687642    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cznkn" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.687674    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cznkn
	I0815 16:31:15.687679    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.687685    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.687690    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.689397    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.689764    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:15.689771    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.689776    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.689787    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.691449    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.691750    3848 pod_ready.go:93] pod "kube-proxy-cznkn" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:15.691759    3848 pod_ready.go:82] duration metric: took 4.111581ms for pod "kube-proxy-cznkn" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.691765    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kxghx" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.691804    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kxghx
	I0815 16:31:15.691809    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.691815    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.691819    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.693452    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.693908    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:15.693915    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.693921    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.693924    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.695674    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:15.695946    3848 pod_ready.go:93] pod "kube-proxy-kxghx" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:15.695955    3848 pod_ready.go:82] duration metric: took 4.185821ms for pod "kube-proxy-kxghx" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.695961    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qpth7" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:15.875071    3848 request.go:632] Waited for 179.069493ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qpth7
	I0815 16:31:15.875187    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qpth7
	I0815 16:31:15.875199    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:15.875210    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:15.875216    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:15.877997    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:16.074238    3848 request.go:632] Waited for 195.764515ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m04
	I0815 16:31:16.074336    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m04
	I0815 16:31:16.074348    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:16.074360    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:16.074366    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:16.076828    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:16.077164    3848 pod_ready.go:93] pod "kube-proxy-qpth7" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:16.077173    3848 pod_ready.go:82] duration metric: took 381.20933ms for pod "kube-proxy-qpth7" in "kube-system" namespace to be "Ready" ...
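
The request.go:632 lines above ("Waited for ... due to client-side throttling, not priority and fairness") come from client-go's client-side rate limiter: the REST client holds a token bucket (the usual defaults are QPS=5 with a burst of 10) and logs whenever a request had to wait for a token, which is why back-to-back checks here pick up ~180-200ms delays. The sketch below is a stdlib-only approximation of that effect; the qps value mirrors the common client-go default, and everything else is illustrative.

// Stdlib-only approximation of the client-side throttling logged above. The
// real limiter lives in k8s.io/client-go (util/flowcontrol); this reproduces
// only the observable effect. qps=5 mirrors a common client-go default.
package main

import (
	"fmt"
	"time"
)

func main() {
	const qps = 5
	tokens := time.Tick(time.Second / qps) // one token every 200ms

	get := func(url string) {
		start := time.Now()
		<-tokens // block until the bucket releases the next token
		if wait := time.Since(start); wait > time.Millisecond {
			fmt.Printf("Waited for %v due to client-side throttling, request: GET:%s\n", wait, url)
		}
		// ... the HTTP GET itself would go here ...
	}

	// Back-to-back requests, as in the readiness checks above: each call
	// blocks until the next 200ms tick, spreading the burst out the way the
	// logged waits show.
	get("https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qpth7")
	get("https://192.169.0.5:8443/api/v1/nodes/ha-138000-m04")
}
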
	I0815 16:31:16.077180    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tf79g" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:16.275150    3848 request.go:632] Waited for 197.922377ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tf79g
	I0815 16:31:16.275315    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tf79g
	I0815 16:31:16.275333    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:16.275348    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:16.275355    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:16.279230    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:16.474637    3848 request.go:632] Waited for 194.734989ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:16.474686    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:16.474694    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:16.474748    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:16.474760    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:16.477402    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:16.477913    3848 pod_ready.go:93] pod "kube-proxy-tf79g" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:16.477922    3848 pod_ready.go:82] duration metric: took 400.738709ms for pod "kube-proxy-tf79g" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:16.477928    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:16.674642    3848 request.go:632] Waited for 196.671207ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000
	I0815 16:31:16.674730    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000
	I0815 16:31:16.674740    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:16.674751    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:16.674791    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:16.677902    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:16.874216    3848 request.go:632] Waited for 195.903155ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:16.874296    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:16.874307    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:16.874318    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:16.874325    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:16.877076    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:16.877354    3848 pod_ready.go:93] pod "kube-scheduler-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:16.877362    3848 pod_ready.go:82] duration metric: took 399.431009ms for pod "kube-scheduler-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:16.877369    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:17.075600    3848 request.go:632] Waited for 198.191772ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m02
	I0815 16:31:17.075685    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m02
	I0815 16:31:17.075692    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:17.075697    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:17.075701    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:17.077601    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:17.275453    3848 request.go:632] Waited for 196.87369ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:17.275508    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:17.275516    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:17.275528    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:17.275536    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:17.278217    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:17.278748    3848 pod_ready.go:93] pod "kube-scheduler-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:17.278761    3848 pod_ready.go:82] duration metric: took 401.387065ms for pod "kube-scheduler-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:17.278778    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:17.474217    3848 request.go:632] Waited for 195.389302ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m03
	I0815 16:31:17.474330    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m03
	I0815 16:31:17.474342    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:17.474353    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:17.474361    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:17.477689    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:17.675623    3848 request.go:632] Waited for 197.469909ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:17.675688    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:17.675697    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:17.675705    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:17.675712    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:17.677994    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:17.678325    3848 pod_ready.go:93] pod "kube-scheduler-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:17.678335    3848 pod_ready.go:82] duration metric: took 399.551961ms for pod "kube-scheduler-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:17.678343    3848 pod_ready.go:39] duration metric: took 37.624501402s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 16:31:17.678361    3848 api_server.go:52] waiting for apiserver process to appear ...
	I0815 16:31:17.678422    3848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 16:31:17.692897    3848 api_server.go:72] duration metric: took 46.249064527s to wait for apiserver process to appear ...
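
Before switching to HTTP probes, minikube confirms a kube-apiserver process is actually running by executing pgrep inside the VM over its SSH runner (the ssh_runner.go line above). A local stand-in for that check, shelling out with os/exec rather than SSH, might look like the following; the pgrep pattern is taken verbatim from the log, the rest is illustrative.

// Local approximation of the apiserver process check above; minikube runs the
// same command through its SSH runner inside the VM.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// pgrep -x matches the whole process name, -n keeps the newest match,
	// -f matches against the full command line. Exit status 1 (no match)
	// surfaces here as a non-nil err.
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("apiserver process not found:", err)
		return
	}
	fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
}
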
	I0815 16:31:17.692911    3848 api_server.go:88] waiting for apiserver healthz status ...
	I0815 16:31:17.692928    3848 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0815 16:31:17.695957    3848 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0815 16:31:17.695990    3848 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0815 16:31:17.695994    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:17.696000    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:17.696004    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:17.696581    3848 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0815 16:31:17.696664    3848 api_server.go:141] control plane version: v1.31.0
	I0815 16:31:17.696676    3848 api_server.go:131] duration metric: took 3.760735ms to wait for apiserver health ...
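
The healthz probe is a plain HTTPS GET expecting status 200 and the body "ok", both visible in the log above. A minimal sketch; skipping certificate verification here only keeps the example short, where a real client would trust the cluster CA instead:

    package health

    import (
    	"crypto/tls"
    	"io"
    	"net/http"
    	"time"
    )

    // apiserverHealthy GETs <endpoint>/healthz and reports whether the
    // apiserver answered 200 "ok", mirroring the check logged above.
    func apiserverHealthy(endpoint string) (bool, error) {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Sketch only: verify against the cluster CA in real code.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(endpoint + "/healthz")
    	if err != nil {
    		return false, err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }
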
	I0815 16:31:17.696684    3848 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 16:31:17.874475    3848 request.go:632] Waited for 177.745811ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:31:17.874542    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:31:17.874551    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:17.874608    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:17.874617    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:17.879453    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:31:17.884757    3848 system_pods.go:59] 26 kube-system pods found
	I0815 16:31:17.884772    3848 system_pods.go:61] "coredns-6f6b679f8f-dmgt5" [47d73953-ec2c-4f17-b2b8-d6a9b5e5a316] Running
	I0815 16:31:17.884778    3848 system_pods.go:61] "coredns-6f6b679f8f-zc8jj" [b4a9df39-b09d-4bc3-97f6-b3176ff8e842] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 16:31:17.884783    3848 system_pods.go:61] "etcd-ha-138000" [c2e7e508-a157-4581-a446-2f48e51c0c16] Running
	I0815 16:31:17.884787    3848 system_pods.go:61] "etcd-ha-138000-m02" [4f371d3f-821b-4eae-ab52-5b75d90acd67] Running
	I0815 16:31:17.884791    3848 system_pods.go:61] "etcd-ha-138000-m03" [ebdd90b3-c3b2-44c2-a411-4f6afa67a455] Running
	I0815 16:31:17.884793    3848 system_pods.go:61] "kindnet-77dc6" [24bfc069-9446-43a7-aa61-308c1a62a20e] Running
	I0815 16:31:17.884796    3848 system_pods.go:61] "kindnet-dsvxt" [7645852b-8535-4cb4-8324-15c9b55176e3] Running
	I0815 16:31:17.884798    3848 system_pods.go:61] "kindnet-m887r" [ba31865b-c712-47a8-9fd8-06420270ac8b] Running
	I0815 16:31:17.884801    3848 system_pods.go:61] "kindnet-z6mnx" [fc6211a0-df99-4350-90ba-a18f74bd1bfc] Running
	I0815 16:31:17.884804    3848 system_pods.go:61] "kube-apiserver-ha-138000" [bbc684f7-d8e2-44f2-987a-df9d05cf54fc] Running
	I0815 16:31:17.884806    3848 system_pods.go:61] "kube-apiserver-ha-138000-m02" [c30e129b-f475-4131-a25e-7eeecb39cbea] Running
	I0815 16:31:17.884809    3848 system_pods.go:61] "kube-apiserver-ha-138000-m03" [90304f02-10f2-446d-b731-674df5602401] Running
	I0815 16:31:17.884811    3848 system_pods.go:61] "kube-controller-manager-ha-138000" [1d4148d1-9798-4662-91de-d9a7dae634e2] Running
	I0815 16:31:17.884814    3848 system_pods.go:61] "kube-controller-manager-ha-138000-m02" [ad693dae-cb0a-46ca-955b-33a9465d911a] Running
	I0815 16:31:17.884816    3848 system_pods.go:61] "kube-controller-manager-ha-138000-m03" [11aa509a-dade-4b66-b157-4d4cccb7f349] Running
	I0815 16:31:17.884819    3848 system_pods.go:61] "kube-proxy-cznkn" [61cd1a9a-80fb-4d0e-a4dd-f5ed2d6ddc0f] Running
	I0815 16:31:17.884821    3848 system_pods.go:61] "kube-proxy-kxghx" [27f61dcc-2812-4038-af2a-aa9f46033869] Running
	I0815 16:31:17.884823    3848 system_pods.go:61] "kube-proxy-qpth7" [a343f80b-0fe9-4c88-9782-5fbf9a6170d1] Running
	I0815 16:31:17.884826    3848 system_pods.go:61] "kube-proxy-tf79g" [ef8a2aeb-55c4-436e-a306-da8b93c9707f] Running
	I0815 16:31:17.884830    3848 system_pods.go:61] "kube-scheduler-ha-138000" [ee1d3eb0-596d-4bf0-bed4-dbd38b286e38] Running
	I0815 16:31:17.884832    3848 system_pods.go:61] "kube-scheduler-ha-138000-m02" [1cc29309-49ad-427e-862a-758cc5711eef] Running
	I0815 16:31:17.884835    3848 system_pods.go:61] "kube-scheduler-ha-138000-m03" [2e3efb3c-6903-4306-adec-d4a47b3a56bd] Running
	I0815 16:31:17.884837    3848 system_pods.go:61] "kube-vip-ha-138000" [1b8c66e5-ffaa-4d4b-a220-8eb664b79d91] Running
	I0815 16:31:17.884839    3848 system_pods.go:61] "kube-vip-ha-138000-m02" [a0fe2ba6-1ace-456f-ab2e-15c43303dcad] Running
	I0815 16:31:17.884841    3848 system_pods.go:61] "kube-vip-ha-138000-m03" [44fe2bd5-bd48-4f86-b38a-7e4b3554174c] Running
	I0815 16:31:17.884844    3848 system_pods.go:61] "storage-provisioner" [35f3a9d7-cb68-4b10-82a2-1bd72d8d1aa6] Running
	I0815 16:31:17.884847    3848 system_pods.go:74] duration metric: took 188.159351ms to wait for pod list to return data ...
	I0815 16:31:17.884852    3848 default_sa.go:34] waiting for default service account to be created ...
	I0815 16:31:18.074641    3848 request.go:632] Waited for 189.738485ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0815 16:31:18.074728    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0815 16:31:18.074738    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:18.074749    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:18.074756    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:18.078635    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:18.078759    3848 default_sa.go:45] found service account: "default"
	I0815 16:31:18.078768    3848 default_sa.go:55] duration metric: took 193.912663ms for default service account to be created ...
	I0815 16:31:18.078774    3848 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 16:31:18.274230    3848 request.go:632] Waited for 195.413402ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:31:18.274340    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:31:18.274351    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:18.274361    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:18.274369    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:18.279297    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:31:18.284504    3848 system_pods.go:86] 26 kube-system pods found
	I0815 16:31:18.284515    3848 system_pods.go:89] "coredns-6f6b679f8f-dmgt5" [47d73953-ec2c-4f17-b2b8-d6a9b5e5a316] Running
	I0815 16:31:18.284521    3848 system_pods.go:89] "coredns-6f6b679f8f-zc8jj" [b4a9df39-b09d-4bc3-97f6-b3176ff8e842] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 16:31:18.284525    3848 system_pods.go:89] "etcd-ha-138000" [c2e7e508-a157-4581-a446-2f48e51c0c16] Running
	I0815 16:31:18.284530    3848 system_pods.go:89] "etcd-ha-138000-m02" [4f371d3f-821b-4eae-ab52-5b75d90acd67] Running
	I0815 16:31:18.284534    3848 system_pods.go:89] "etcd-ha-138000-m03" [ebdd90b3-c3b2-44c2-a411-4f6afa67a455] Running
	I0815 16:31:18.284537    3848 system_pods.go:89] "kindnet-77dc6" [24bfc069-9446-43a7-aa61-308c1a62a20e] Running
	I0815 16:31:18.284540    3848 system_pods.go:89] "kindnet-dsvxt" [7645852b-8535-4cb4-8324-15c9b55176e3] Running
	I0815 16:31:18.284543    3848 system_pods.go:89] "kindnet-m887r" [ba31865b-c712-47a8-9fd8-06420270ac8b] Running
	I0815 16:31:18.284545    3848 system_pods.go:89] "kindnet-z6mnx" [fc6211a0-df99-4350-90ba-a18f74bd1bfc] Running
	I0815 16:31:18.284550    3848 system_pods.go:89] "kube-apiserver-ha-138000" [bbc684f7-d8e2-44f2-987a-df9d05cf54fc] Running
	I0815 16:31:18.284554    3848 system_pods.go:89] "kube-apiserver-ha-138000-m02" [c30e129b-f475-4131-a25e-7eeecb39cbea] Running
	I0815 16:31:18.284557    3848 system_pods.go:89] "kube-apiserver-ha-138000-m03" [90304f02-10f2-446d-b731-674df5602401] Running
	I0815 16:31:18.284561    3848 system_pods.go:89] "kube-controller-manager-ha-138000" [1d4148d1-9798-4662-91de-d9a7dae634e2] Running
	I0815 16:31:18.284564    3848 system_pods.go:89] "kube-controller-manager-ha-138000-m02" [ad693dae-cb0a-46ca-955b-33a9465d911a] Running
	I0815 16:31:18.284567    3848 system_pods.go:89] "kube-controller-manager-ha-138000-m03" [11aa509a-dade-4b66-b157-4d4cccb7f349] Running
	I0815 16:31:18.284570    3848 system_pods.go:89] "kube-proxy-cznkn" [61cd1a9a-80fb-4d0e-a4dd-f5ed2d6ddc0f] Running
	I0815 16:31:18.284572    3848 system_pods.go:89] "kube-proxy-kxghx" [27f61dcc-2812-4038-af2a-aa9f46033869] Running
	I0815 16:31:18.284575    3848 system_pods.go:89] "kube-proxy-qpth7" [a343f80b-0fe9-4c88-9782-5fbf9a6170d1] Running
	I0815 16:31:18.284579    3848 system_pods.go:89] "kube-proxy-tf79g" [ef8a2aeb-55c4-436e-a306-da8b93c9707f] Running
	I0815 16:31:18.284582    3848 system_pods.go:89] "kube-scheduler-ha-138000" [ee1d3eb0-596d-4bf0-bed4-dbd38b286e38] Running
	I0815 16:31:18.284586    3848 system_pods.go:89] "kube-scheduler-ha-138000-m02" [1cc29309-49ad-427e-862a-758cc5711eef] Running
	I0815 16:31:18.284588    3848 system_pods.go:89] "kube-scheduler-ha-138000-m03" [2e3efb3c-6903-4306-adec-d4a47b3a56bd] Running
	I0815 16:31:18.284591    3848 system_pods.go:89] "kube-vip-ha-138000" [1b8c66e5-ffaa-4d4b-a220-8eb664b79d91] Running
	I0815 16:31:18.284594    3848 system_pods.go:89] "kube-vip-ha-138000-m02" [a0fe2ba6-1ace-456f-ab2e-15c43303dcad] Running
	I0815 16:31:18.284596    3848 system_pods.go:89] "kube-vip-ha-138000-m03" [44fe2bd5-bd48-4f86-b38a-7e4b3554174c] Running
	I0815 16:31:18.284599    3848 system_pods.go:89] "storage-provisioner" [35f3a9d7-cb68-4b10-82a2-1bd72d8d1aa6] Running
	I0815 16:31:18.284603    3848 system_pods.go:126] duration metric: took 205.826361ms to wait for k8s-apps to be running ...
	I0815 16:31:18.284609    3848 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 16:31:18.284679    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 16:31:18.296708    3848 system_svc.go:56] duration metric: took 12.095446ms WaitForService to wait for kubelet
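
The kubelet check relies entirely on systemctl's exit code: `is-active --quiet` prints nothing and exits 0 only when the unit is active. The same idea as a local sketch (minikube runs the command over SSH via ssh_runner):

    package svc

    import "os/exec"

    // unitActive reports whether a systemd unit is active. With
    // `is-active --quiet` the exit code alone carries the answer,
    // which is what the WaitForService step above depends on.
    func unitActive(unit string) bool {
    	return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
    }
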
	I0815 16:31:18.296724    3848 kubeadm.go:582] duration metric: took 46.852894704s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 16:31:18.296736    3848 node_conditions.go:102] verifying NodePressure condition ...
	I0815 16:31:18.474267    3848 request.go:632] Waited for 177.483283ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0815 16:31:18.474322    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0815 16:31:18.474330    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:18.474371    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:18.474392    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:18.477388    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:18.478383    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:31:18.478396    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:31:18.478405    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:31:18.478408    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:31:18.478412    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:31:18.478415    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:31:18.478418    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:31:18.478423    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:31:18.478427    3848 node_conditions.go:105] duration metric: took 181.688465ms to run NodePressure ...
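
The NodePressure step lists all nodes and reads each node's capacity, which is where the repeated ephemeral-storage/CPU pairs above come from. A client-go sketch extracting the same two figures:

    package nodes

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // capacities returns ephemeral-storage and CPU capacity per node, the
    // same two figures node_conditions.go prints in the log above.
    func capacities(cs *kubernetes.Clientset) (map[string][2]string, error) {
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		return nil, err
    	}
    	out := make(map[string][2]string, len(nodes.Items))
    	for _, n := range nodes.Items {
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		out[n.Name] = [2]string{storage.String(), cpu.String()}
    	}
    	return out, nil
    }
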
	I0815 16:31:18.478434    3848 start.go:241] waiting for startup goroutines ...
	I0815 16:31:18.478453    3848 start.go:255] writing updated cluster config ...
	I0815 16:31:18.501967    3848 out.go:201] 
	I0815 16:31:18.522062    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:31:18.522177    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:31:18.560022    3848 out.go:177] * Starting "ha-138000-m03" control-plane node in "ha-138000" cluster
	I0815 16:31:18.618077    3848 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:31:18.618104    3848 cache.go:56] Caching tarball of preloaded images
	I0815 16:31:18.618293    3848 preload.go:172] Found /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0815 16:31:18.618310    3848 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:31:18.618409    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:31:18.619051    3848 start.go:360] acquireMachinesLock for ha-138000-m03: {Name:mkac8034c8a60617271f2754a03dcaf74fc4fef7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:31:18.619147    3848 start.go:364] duration metric: took 77.203µs to acquireMachinesLock for "ha-138000-m03"
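
acquireMachinesLock serializes machine operations across processes; note the Delay/Timeout options in the lock spec logged above. Minikube uses a named cross-process mutex internally, so purely as an illustration of the idea, here is a blocking flock-based file lock:

    package lock

    import (
    	"os"
    	"syscall"
    )

    // acquire takes an exclusive advisory lock on path, blocking until it
    // is free. Illustrative only: minikube's real lock (start.go:360) is a
    // named cross-process mutex with delay/timeout, not a bare flock.
    func acquire(path string) (release func(), err error) {
    	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o644)
    	if err != nil {
    		return nil, err
    	}
    	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
    		f.Close()
    		return nil, err
    	}
    	return func() {
    		syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
    		f.Close()
    	}, nil
    }
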
	I0815 16:31:18.619166    3848 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:31:18.619174    3848 fix.go:54] fixHost starting: m03
	I0815 16:31:18.619485    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:31:18.619510    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:31:18.628416    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52280
	I0815 16:31:18.628739    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:31:18.629076    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:31:18.629087    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:31:18.629285    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:31:18.629412    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:31:18.629506    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetState
	I0815 16:31:18.629587    3848 main.go:141] libmachine: (ha-138000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:31:18.629688    3848 main.go:141] libmachine: (ha-138000-m03) DBG | hyperkit pid from json: 3119
	I0815 16:31:18.630594    3848 main.go:141] libmachine: (ha-138000-m03) DBG | hyperkit pid 3119 missing from process table
	I0815 16:31:18.630635    3848 fix.go:112] recreateIfNeeded on ha-138000-m03: state=Stopped err=<nil>
	I0815 16:31:18.630646    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	W0815 16:31:18.630738    3848 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:31:18.653953    3848 out.go:177] * Restarting existing hyperkit VM for "ha-138000-m03" ...
	I0815 16:31:18.711722    3848 main.go:141] libmachine: (ha-138000-m03) Calling .Start
	I0815 16:31:18.712041    3848 main.go:141] libmachine: (ha-138000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:31:18.712160    3848 main.go:141] libmachine: (ha-138000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/hyperkit.pid
	I0815 16:31:18.713734    3848 main.go:141] libmachine: (ha-138000-m03) DBG | hyperkit pid 3119 missing from process table
	I0815 16:31:18.713751    3848 main.go:141] libmachine: (ha-138000-m03) DBG | pid 3119 is in state "Stopped"
	I0815 16:31:18.713774    3848 main.go:141] libmachine: (ha-138000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/hyperkit.pid...
	I0815 16:31:18.713958    3848 main.go:141] libmachine: (ha-138000-m03) DBG | Using UUID 4228381e-4618-4b8b-ac7c-129bf380703a
	I0815 16:31:18.742338    3848 main.go:141] libmachine: (ha-138000-m03) DBG | Generated MAC 9e:18:89:2a:2d:99
	I0815 16:31:18.742370    3848 main.go:141] libmachine: (ha-138000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000
	I0815 16:31:18.742565    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"4228381e-4618-4b8b-ac7c-129bf380703a", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037f470)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:31:18.742609    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"4228381e-4618-4b8b-ac7c-129bf380703a", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037f470)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:31:18.742699    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "4228381e-4618-4b8b-ac7c-129bf380703a", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/ha-138000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"}
	I0815 16:31:18.742751    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 4228381e-4618-4b8b-ac7c-129bf380703a -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/ha-138000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"
	I0815 16:31:18.742790    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0815 16:31:18.744551    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 DEBUG: hyperkit: Pid is 4186
	I0815 16:31:18.745071    3848 main.go:141] libmachine: (ha-138000-m03) DBG | Attempt 0
	I0815 16:31:18.745087    3848 main.go:141] libmachine: (ha-138000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:31:18.745163    3848 main.go:141] libmachine: (ha-138000-m03) DBG | hyperkit pid from json: 4186
	I0815 16:31:18.746856    3848 main.go:141] libmachine: (ha-138000-m03) DBG | Searching for 9e:18:89:2a:2d:99 in /var/db/dhcpd_leases ...
	I0815 16:31:18.746937    3848 main.go:141] libmachine: (ha-138000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0815 16:31:18.746955    3848 main.go:141] libmachine: (ha-138000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:31:18.746980    3848 main.go:141] libmachine: (ha-138000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:31:18.746991    3848 main.go:141] libmachine: (ha-138000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be8e15}
	I0815 16:31:18.747032    3848 main.go:141] libmachine: (ha-138000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfdedc}
	I0815 16:31:18.747039    3848 main.go:141] libmachine: (ha-138000-m03) DBG | Found match: 9e:18:89:2a:2d:99
	I0815 16:31:18.747040    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetConfigRaw
	I0815 16:31:18.747045    3848 main.go:141] libmachine: (ha-138000-m03) DBG | IP: 192.169.0.7
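
The driver resolves the VM's generated MAC to an IP by scanning the host's DHCP lease database, as the "Searching for 9e:18:89:2a:2d:99 in /var/db/dhcpd_leases" lines show. A sketch of that lookup; the key names follow the macOS bootpd lease-file format and are assumptions, since the log only shows the parsed result:

    package leases

    import (
    	"bufio"
    	"os"
    	"strings"
    )

    // ipForMAC scans a macOS bootpd lease file for a hardware address and
    // returns the ip_address of the surrounding lease block. The file is a
    // sequence of brace-delimited blocks of key=value lines (field names
    // assumed from the /var/db/dhcpd_leases format).
    func ipForMAC(leaseFile, mac string) (string, bool) {
    	f, err := os.Open(leaseFile)
    	if err != nil {
    		return "", false
    	}
    	defer f.Close()

    	var ip string
    	var matched bool
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		switch {
    		case line == "{": // new lease block
    			ip, matched = "", false
    		case strings.HasPrefix(line, "ip_address="):
    			ip = strings.TrimPrefix(line, "ip_address=")
    		case strings.HasPrefix(line, "hw_address="):
    			// e.g. hw_address=1,9e:18:89:2a:2d:99 — type prefix, then MAC
    			if strings.HasSuffix(line, mac) {
    				matched = true
    			}
    		case line == "}":
    			if matched && ip != "" {
    				return ip, true
    			}
    		}
    	}
    	return "", false
    }
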
	I0815 16:31:18.747774    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetIP
	I0815 16:31:18.747963    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:31:18.748524    3848 machine.go:93] provisionDockerMachine start ...
	I0815 16:31:18.748538    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:31:18.748670    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:18.748765    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:18.748845    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:18.748950    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:18.749050    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:18.749179    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:31:18.749325    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0815 16:31:18.749333    3848 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 16:31:18.752657    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0815 16:31:18.760833    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0815 16:31:18.761721    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:31:18.761738    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:31:18.761746    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:31:18.761755    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:31:19.145894    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0815 16:31:19.145910    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0815 16:31:19.260828    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:31:19.260843    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:31:19.260851    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:31:19.260862    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:31:19.261711    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0815 16:31:19.261721    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0815 16:31:24.888063    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:24 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0815 16:31:24.888137    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:24 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0815 16:31:24.888149    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:24 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0815 16:31:24.911372    3848 main.go:141] libmachine: (ha-138000-m03) DBG | 2024/08/15 16:31:24 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0815 16:31:29.819902    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 16:31:29.819917    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetMachineName
	I0815 16:31:29.820052    3848 buildroot.go:166] provisioning hostname "ha-138000-m03"
	I0815 16:31:29.820067    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetMachineName
	I0815 16:31:29.820174    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:29.820268    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:29.820353    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:29.820429    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:29.820504    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:29.820626    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:31:29.820777    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0815 16:31:29.820785    3848 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-138000-m03 && echo "ha-138000-m03" | sudo tee /etc/hostname
	I0815 16:31:29.898224    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-138000-m03
	
	I0815 16:31:29.898247    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:29.898395    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:29.898481    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:29.898567    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:29.898654    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:29.898789    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:31:29.898974    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0815 16:31:29.898986    3848 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-138000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-138000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-138000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 16:31:29.968919    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 16:31:29.968938    3848 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19452-977/.minikube CaCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19452-977/.minikube}
	I0815 16:31:29.968947    3848 buildroot.go:174] setting up certificates
	I0815 16:31:29.968952    3848 provision.go:84] configureAuth start
	I0815 16:31:29.968959    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetMachineName
	I0815 16:31:29.969088    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetIP
	I0815 16:31:29.969172    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:29.969251    3848 provision.go:143] copyHostCerts
	I0815 16:31:29.969278    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:31:29.969343    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem, removing ...
	I0815 16:31:29.969348    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:31:29.969482    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem (1082 bytes)
	I0815 16:31:29.969678    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:31:29.969716    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem, removing ...
	I0815 16:31:29.969721    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:31:29.969830    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem (1123 bytes)
	I0815 16:31:29.969984    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:31:29.970023    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem, removing ...
	I0815 16:31:29.970028    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:31:29.970129    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem (1679 bytes)
	I0815 16:31:29.970281    3848 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem org=jenkins.ha-138000-m03 san=[127.0.0.1 192.169.0.7 ha-138000-m03 localhost minikube]
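
provision.go:117 issues a server certificate whose SAN list covers the loopback address, the node IP, the hostname, and the generic names shown above, signed by the minikube CA. A self-contained sketch that builds the same kind of SAN-bearing certificate; it self-signs only to stay short, whereas the real flow signs with ca.pem/ca-key.pem:

    package certs

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // serverCert creates a self-signed server certificate carrying a SAN
    // list like the one in the log. Minikube instead signs with its CA;
    // self-signing here just keeps the sketch compact.
    func serverCert(dnsNames []string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-138000-m03"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     dnsNames, // e.g. ha-138000-m03, localhost, minikube
    		IPAddresses:  ips,      // e.g. 127.0.0.1, 192.169.0.7
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	return der, key, err
    }
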
	I0815 16:31:30.063220    3848 provision.go:177] copyRemoteCerts
	I0815 16:31:30.063270    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 16:31:30.063286    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:30.063426    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:30.063510    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:30.063603    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:30.063685    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/id_rsa Username:docker}
	I0815 16:31:30.101783    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 16:31:30.101861    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 16:31:30.121792    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 16:31:30.121868    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 16:31:30.141970    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 16:31:30.142077    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 16:31:30.161960    3848 provision.go:87] duration metric: took 192.993235ms to configureAuth
	I0815 16:31:30.161983    3848 buildroot.go:189] setting minikube options for container-runtime
	I0815 16:31:30.162167    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:31:30.162199    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:31:30.162337    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:30.162430    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:30.162521    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:30.162598    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:30.162675    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:30.162784    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:31:30.162913    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0815 16:31:30.162921    3848 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0815 16:31:30.228685    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0815 16:31:30.228697    3848 buildroot.go:70] root file system type: tmpfs
	I0815 16:31:30.228781    3848 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0815 16:31:30.228793    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:30.228929    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:30.229020    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:30.229108    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:30.229195    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:30.229313    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:31:30.229444    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0815 16:31:30.229494    3848 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0815 16:31:30.305200    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0815 16:31:30.305217    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:30.305352    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:30.305448    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:30.305543    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:30.305648    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:30.305802    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:31:30.305948    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0815 16:31:30.305961    3848 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0815 16:31:31.969522    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0815 16:31:31.969536    3848 machine.go:96] duration metric: took 13.221047415s to provisionDockerMachine
	I0815 16:31:31.969548    3848 start.go:293] postStartSetup for "ha-138000-m03" (driver="hyperkit")
	I0815 16:31:31.969555    3848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 16:31:31.969566    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:31:31.969757    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 16:31:31.969772    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:31.969871    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:31.969976    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:31.970054    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:31.970139    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/id_rsa Username:docker}
	I0815 16:31:32.013928    3848 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 16:31:32.017159    3848 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 16:31:32.017170    3848 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/addons for local assets ...
	I0815 16:31:32.017274    3848 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/files for local assets ...
	I0815 16:31:32.017462    3848 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> 14982.pem in /etc/ssl/certs
	I0815 16:31:32.017468    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /etc/ssl/certs/14982.pem
	I0815 16:31:32.017677    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 16:31:32.029028    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:31:32.059130    3848 start.go:296] duration metric: took 89.573356ms for postStartSetup
	I0815 16:31:32.059162    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:31:32.059341    3848 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0815 16:31:32.059355    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:32.059449    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:32.059534    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:32.059624    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:32.059708    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/id_rsa Username:docker}
	I0815 16:31:32.098694    3848 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0815 16:31:32.098758    3848 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0815 16:31:32.152993    3848 fix.go:56] duration metric: took 13.533862474s for fixHost
	I0815 16:31:32.153017    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:32.153168    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:32.153266    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:32.153360    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:32.153453    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:32.153579    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:31:32.153719    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0815 16:31:32.153727    3848 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 16:31:32.220010    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723764692.474550074
	
	I0815 16:31:32.220026    3848 fix.go:216] guest clock: 1723764692.474550074
	I0815 16:31:32.220031    3848 fix.go:229] Guest: 2024-08-15 16:31:32.474550074 -0700 PDT Remote: 2024-08-15 16:31:32.153007 -0700 PDT m=+98.155027601 (delta=321.543074ms)
	I0815 16:31:32.220043    3848 fix.go:200] guest clock delta is within tolerance: 321.543074ms
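
fix.go reads the guest clock with `date +%s.%N`, compares it to the host clock, and accepts the drift when it is within tolerance (321ms here). A sketch of the delta computation; the tolerance threshold itself is not shown in the log:

    package clock

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // guestDelta parses the guest's `date +%s.%N` output and returns its
    // offset from local time (positive when the guest runs ahead).
    func guestDelta(out string) (time.Duration, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	if len(parts) != 2 {
    		return 0, fmt.Errorf("unexpected date output %q", out)
    	}
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return 0, err
    	}
    	nsec, err := strconv.ParseInt(parts[1], 10, 64)
    	if err != nil {
    		return 0, err
    	}
    	return time.Unix(sec, nsec).Sub(time.Now()), nil
    }
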
	I0815 16:31:32.220047    3848 start.go:83] releasing machines lock for "ha-138000-m03", held for 13.600937599s
	I0815 16:31:32.220063    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:31:32.220193    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetIP
	I0815 16:31:32.242484    3848 out.go:177] * Found network options:
	I0815 16:31:32.262540    3848 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0815 16:31:32.284750    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 16:31:32.284780    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 16:31:32.284808    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:31:32.285357    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:31:32.285486    3848 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:31:32.285580    3848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 16:31:32.285610    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	W0815 16:31:32.285635    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 16:31:32.285649    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 16:31:32.285725    3848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 16:31:32.285743    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:31:32.285746    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:32.285912    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:31:32.285930    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:32.286051    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:31:32.286078    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:32.286176    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:31:32.286220    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/id_rsa Username:docker}
	I0815 16:31:32.286297    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/id_rsa Username:docker}
	W0815 16:31:32.322271    3848 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 16:31:32.322331    3848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 16:31:32.369504    3848 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 16:31:32.369521    3848 start.go:495] detecting cgroup driver to use...
	I0815 16:31:32.369607    3848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:31:32.385397    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0815 16:31:32.393793    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0815 16:31:32.401893    3848 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0815 16:31:32.401954    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0815 16:31:32.410021    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:31:32.418144    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0815 16:31:32.426371    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:31:32.434583    3848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 16:31:32.442902    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0815 16:31:32.451254    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0815 16:31:32.459565    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0815 16:31:32.467863    3848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 16:31:32.475226    3848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 16:31:32.482724    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:31:32.583602    3848 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0815 16:31:32.603710    3848 start.go:495] detecting cgroup driver to use...
	I0815 16:31:32.603796    3848 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0815 16:31:32.620091    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:31:32.633248    3848 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 16:31:32.652532    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:31:32.666138    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:31:32.676424    3848 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0815 16:31:32.697061    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:31:32.707503    3848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:31:32.722896    3848 ssh_runner.go:195] Run: which cri-dockerd
	I0815 16:31:32.725902    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0815 16:31:32.733526    3848 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0815 16:31:32.747908    3848 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0815 16:31:32.853084    3848 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0815 16:31:32.953384    3848 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0815 16:31:32.953408    3848 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
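
docker.go writes a small /etc/docker/daemon.json (130 bytes above) to pin the cgroup driver. The file's exact contents are not shown in the log; a plausible reconstruction using Docker's documented exec-opts key:

    package dockercfg

    import "encoding/json"

    // daemonJSON renders a minimal /etc/docker/daemon.json that pins the
    // cgroup driver, the setting the log says minikube is configuring.
    // The real file minikube writes is not shown above; this is a guess
    // at its shape via Docker's documented exec-opts key.
    func daemonJSON(driver string) ([]byte, error) {
    	cfg := map[string]any{
    		"exec-opts": []string{"native.cgroupdriver=" + driver},
    	}
    	return json.MarshalIndent(cfg, "", "  ")
    }
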
	I0815 16:31:32.968013    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:31:33.073760    3848 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0815 16:31:35.380632    3848 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.306859581s)
	I0815 16:31:35.380695    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0815 16:31:35.391776    3848 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0815 16:31:35.404750    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:31:35.414823    3848 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0815 16:31:35.508250    3848 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0815 16:31:35.605930    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:31:35.720643    3848 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0815 16:31:35.734388    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:31:35.745523    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:31:35.849768    3848 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0815 16:31:35.916223    3848 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0815 16:31:35.916311    3848 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0815 16:31:35.920652    3848 start.go:563] Will wait 60s for crictl version
	I0815 16:31:35.920712    3848 ssh_runner.go:195] Run: which crictl
	I0815 16:31:35.923687    3848 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 16:31:35.951143    3848 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0815 16:31:35.951216    3848 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:31:35.970702    3848 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:31:36.011114    3848 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0815 16:31:36.053083    3848 out.go:177]   - env NO_PROXY=192.169.0.5
	I0815 16:31:36.074064    3848 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0815 16:31:36.094992    3848 main.go:141] libmachine: (ha-138000-m03) Calling .GetIP
	I0815 16:31:36.095254    3848 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0815 16:31:36.098563    3848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
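This one-liner makes the /etc/hosts update idempotent: filter out any line already ending in a tab plus host.minikube.internal, append the fresh mapping, and copy the result back over /etc/hosts with sudo. The same logic as a Go sketch (writing the real /etc/hosts would need root; path and entry values are the ones from the log):

	package main
	
	import (
		"os"
		"strings"
	)
	
	// upsertHost drops any existing "<ip>\t<name>" line and appends a fresh one,
	// mirroring the `{ grep -v ...; echo ...; } > tmp && cp` pattern above.
	func upsertHost(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}
	
	func main() {
		if err := upsertHost("/etc/hosts", "192.169.0.1", "host.minikube.internal"); err != nil {
			panic(err)
		}
	}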
	I0815 16:31:36.107924    3848 mustload.go:65] Loading cluster: ha-138000
	I0815 16:31:36.108121    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:31:36.108349    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:31:36.108371    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:31:36.117631    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52302
	I0815 16:31:36.118004    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:31:36.118362    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:31:36.118373    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:31:36.118572    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:31:36.118683    3848 main.go:141] libmachine: (ha-138000) Calling .GetState
	I0815 16:31:36.118769    3848 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:31:36.118858    3848 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid from json: 3862
	I0815 16:31:36.119807    3848 host.go:66] Checking if "ha-138000" exists ...
	I0815 16:31:36.120056    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:31:36.120079    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:31:36.128888    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52304
	I0815 16:31:36.129245    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:31:36.129613    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:31:36.129628    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:31:36.129838    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:31:36.129960    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:31:36.130061    3848 certs.go:68] Setting up /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000 for IP: 192.169.0.7
	I0815 16:31:36.130067    3848 certs.go:194] generating shared ca certs ...
	I0815 16:31:36.130076    3848 certs.go:226] acquiring lock for ca certs: {Name:mka1c88019f6064d2983ac988db71a67aeb65696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:31:36.130237    3848 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key
	I0815 16:31:36.130321    3848 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key
	I0815 16:31:36.130330    3848 certs.go:256] generating profile certs ...
	I0815 16:31:36.130443    3848 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.key
	I0815 16:31:36.130530    3848 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key.c7e1c29f
	I0815 16:31:36.130604    3848 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key
	I0815 16:31:36.130617    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 16:31:36.130638    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 16:31:36.130658    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 16:31:36.130676    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 16:31:36.130694    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 16:31:36.130735    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 16:31:36.130766    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 16:31:36.130785    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 16:31:36.130871    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem (1338 bytes)
	W0815 16:31:36.130920    3848 certs.go:480] ignoring /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498_empty.pem, impossibly tiny 0 bytes
	I0815 16:31:36.130928    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 16:31:36.130977    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem (1082 bytes)
	I0815 16:31:36.131019    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem (1123 bytes)
	I0815 16:31:36.131050    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem (1679 bytes)
	I0815 16:31:36.131116    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:31:36.131153    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem -> /usr/share/ca-certificates/1498.pem
	I0815 16:31:36.131174    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /usr/share/ca-certificates/14982.pem
	I0815 16:31:36.131191    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:31:36.131214    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:31:36.131305    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:31:36.131384    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:31:36.131503    3848 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:31:36.131582    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:31:36.163135    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0815 16:31:36.167195    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0815 16:31:36.177598    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0815 16:31:36.181380    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0815 16:31:36.190596    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0815 16:31:36.194001    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0815 16:31:36.202689    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0815 16:31:36.205906    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0815 16:31:36.214386    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0815 16:31:36.217472    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0815 16:31:36.226235    3848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0815 16:31:36.229561    3848 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0815 16:31:36.238534    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 16:31:36.259009    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 16:31:36.279081    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 16:31:36.299147    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 16:31:36.319142    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 16:31:36.339480    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 16:31:36.359157    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 16:31:36.379445    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 16:31:36.399731    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem --> /usr/share/ca-certificates/1498.pem (1338 bytes)
	I0815 16:31:36.419506    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /usr/share/ca-certificates/14982.pem (1708 bytes)
	I0815 16:31:36.439172    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 16:31:36.458742    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0815 16:31:36.472323    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0815 16:31:36.486349    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0815 16:31:36.500064    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0815 16:31:36.513680    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0815 16:31:36.527778    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0815 16:31:36.541967    3848 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0815 16:31:36.555903    3848 ssh_runner.go:195] Run: openssl version
	I0815 16:31:36.560554    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14982.pem && ln -fs /usr/share/ca-certificates/14982.pem /etc/ssl/certs/14982.pem"
	I0815 16:31:36.569772    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14982.pem
	I0815 16:31:36.573086    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:14 /usr/share/ca-certificates/14982.pem
	I0815 16:31:36.573133    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14982.pem
	I0815 16:31:36.577434    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14982.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 16:31:36.585945    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 16:31:36.594481    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:31:36.598014    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:05 /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:31:36.598056    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:31:36.602322    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 16:31:36.611545    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1498.pem && ln -fs /usr/share/ca-certificates/1498.pem /etc/ssl/certs/1498.pem"
	I0815 16:31:36.620267    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1498.pem
	I0815 16:31:36.623763    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:14 /usr/share/ca-certificates/1498.pem
	I0815 16:31:36.623818    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1498.pem
	I0815 16:31:36.628404    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1498.pem /etc/ssl/certs/51391683.0"
	I0815 16:31:36.637260    3848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 16:31:36.640760    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 16:31:36.645076    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 16:31:36.649285    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 16:31:36.653546    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 16:31:36.657801    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 16:31:36.662041    3848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
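Each `openssl x509 -checkend 86400` run above asks whether a certificate remains valid for at least another 24 hours, which is how stale control-plane certs are caught before reuse. An equivalent pure-Go check with crypto/x509 (the cert path is one of those probed above):

	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	func main() {
		// Equivalent of `openssl x509 -noout -in <crt> -checkend 86400`:
		// non-zero exit if the cert expires within the next 86400 seconds.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 86400s")
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least another 24h")
	}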
	I0815 16:31:36.666218    3848 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.31.0 docker true true} ...
	I0815 16:31:36.666285    3848 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-138000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 16:31:36.666303    3848 kube-vip.go:115] generating kube-vip config ...
	I0815 16:31:36.666340    3848 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 16:31:36.678617    3848 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 16:31:36.678664    3848 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
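The generated manifest runs kube-vip as a static pod that holds the 192.169.0.254 VIP and load-balances port 8443 across the control planes; vip_leaderelection and vip_leasename mean ownership of the VIP is coordinated through a Lease named plndr-cp-lock in kube-system. A hedged client-go sketch for inspecting which node currently holds that lease (kubeconfig path taken from this run; adjust as needed):

	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19452-977/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// vip_leasename from the manifest above; kube-vip renews this Lease.
		lease, err := cs.CoordinationV1().Leases("kube-system").Get(context.Background(), "plndr-cp-lock", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if lease.Spec.HolderIdentity != nil {
			fmt.Println("current kube-vip leader:", *lease.Spec.HolderIdentity)
		}
	}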
	I0815 16:31:36.678722    3848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 16:31:36.686802    3848 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 16:31:36.686869    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0815 16:31:36.694600    3848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0815 16:31:36.708358    3848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 16:31:36.721865    3848 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0815 16:31:36.736604    3848 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0815 16:31:36.739496    3848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 16:31:36.748868    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:31:36.847387    3848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:31:36.862652    3848 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 16:31:36.862839    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:31:36.884247    3848 out.go:177] * Verifying Kubernetes components...
	I0815 16:31:36.904597    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:31:37.032729    3848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:31:37.044674    3848 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:31:37.044869    3848 kapi.go:59] client config for ha-138000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.key", CAFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x3a6cf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0815 16:31:37.044913    3848 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0815 16:31:37.045078    3848 node_ready.go:35] waiting up to 6m0s for node "ha-138000-m03" to be "Ready" ...
	I0815 16:31:37.045127    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:37.045132    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.045138    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.045142    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.047558    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.545663    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:37.545719    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.545727    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.545756    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.548346    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.548775    3848 node_ready.go:49] node "ha-138000-m03" has status "Ready":"True"
	I0815 16:31:37.548786    3848 node_ready.go:38] duration metric: took 503.701087ms for node "ha-138000-m03" to be "Ready" ...
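The node_ready wait above is a plain poll: re-issue GET /api/v1/nodes/ha-138000-m03 roughly every 500ms until the Ready condition reports True, within the 6m0s budget. A minimal client-go equivalent, assuming the wait helpers from apimachinery are available:

	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19452-977/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 500ms for up to 6 minutes, like the wait loop in the log.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, "ha-138000-m03", metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API errors: keep polling
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("node ha-138000-m03 is Ready")
	}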
	I0815 16:31:37.548799    3848 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 16:31:37.548839    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:31:37.548848    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.548854    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.548859    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.555174    3848 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 16:31:37.561193    3848 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dmgt5" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:37.561251    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-dmgt5
	I0815 16:31:37.561256    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.561262    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.561267    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.563487    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.564065    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:37.564072    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.564078    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.564081    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.566147    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.566458    3848 pod_ready.go:93] pod "coredns-6f6b679f8f-dmgt5" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:37.566468    3848 pod_ready.go:82] duration metric: took 5.259716ms for pod "coredns-6f6b679f8f-dmgt5" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:37.566475    3848 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-zc8jj" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:37.566514    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-zc8jj
	I0815 16:31:37.566519    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.566525    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.566529    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.568717    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.569347    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:37.569355    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.569361    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.569365    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.571508    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.571903    3848 pod_ready.go:93] pod "coredns-6f6b679f8f-zc8jj" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:37.571913    3848 pod_ready.go:82] duration metric: took 5.431792ms for pod "coredns-6f6b679f8f-zc8jj" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:37.571919    3848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:37.571962    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000
	I0815 16:31:37.571967    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.571973    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.571976    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.574222    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.574650    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:37.574659    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.574665    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.574669    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.576917    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.577415    3848 pod_ready.go:93] pod "etcd-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:37.577426    3848 pod_ready.go:82] duration metric: took 5.501032ms for pod "etcd-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:37.577433    3848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:37.577470    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m02
	I0815 16:31:37.577478    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.577485    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.577489    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.579610    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.580030    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:37.580038    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.580044    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.580049    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.582713    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:37.583250    3848 pod_ready.go:93] pod "etcd-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:37.583261    3848 pod_ready.go:82] duration metric: took 5.823471ms for pod "etcd-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:37.583269    3848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:37.745749    3848 request.go:632] Waited for 162.439343ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:37.745806    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:37.745816    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.745824    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.745836    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.748134    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
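The recurring "Waited for ... due to client-side throttling" lines are emitted by client-go's token-bucket rate limiter, not by the API server; the rest.Config dump earlier shows QPS:0, Burst:0, which fall back to the client defaults (5 QPS, burst 10). A sketch of raising those limits; the numbers are illustrative, not what minikube configures:

	package main
	
	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19452-977/kubeconfig")
		if err != nil {
			panic(err)
		}
		cfg.QPS = 50    // default is 5 requests/second once the burst is spent
		cfg.Burst = 100 // default burst is 10
		if _, err := kubernetes.NewForConfig(cfg); err != nil {
			panic(err)
		}
	}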
	I0815 16:31:37.945907    3848 request.go:632] Waited for 197.272516ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:37.945950    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:37.945956    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:37.945962    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:37.945966    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:37.948855    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:38.146195    3848 request.go:632] Waited for 62.814852ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:38.146243    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:38.146249    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:38.146296    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:38.146301    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:38.149137    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:38.346943    3848 request.go:632] Waited for 197.306674ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:38.346985    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:38.346994    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:38.347003    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:38.347010    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:38.349878    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:38.583459    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:38.583505    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:38.583514    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:38.583520    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:38.590031    3848 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 16:31:38.745745    3848 request.go:632] Waited for 155.336663ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:38.745818    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:38.745825    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:38.745831    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:38.745836    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:38.748530    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:39.083990    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:39.084003    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:39.084009    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:39.084013    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:39.086519    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:39.146468    3848 request.go:632] Waited for 59.248658ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:39.146510    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:39.146515    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:39.146521    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:39.146525    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:39.148504    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:39.583999    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:39.584017    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:39.584026    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:39.584029    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:39.589510    3848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 16:31:39.590427    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:39.590438    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:39.590445    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:39.590449    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:39.592655    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:39.593056    3848 pod_ready.go:103] pod "etcd-ha-138000-m03" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:40.084185    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:40.084202    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:40.084209    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:40.084214    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:40.086419    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:40.087158    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:40.087166    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:40.087172    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:40.087196    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:40.088975    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:40.584037    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:40.584051    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:40.584058    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:40.584061    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:40.586450    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:40.586944    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:40.586952    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:40.586958    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:40.586963    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:40.589014    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:41.083405    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:41.083421    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:41.083427    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:41.083433    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:41.086228    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:41.086971    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:41.086978    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:41.086985    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:41.086990    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:41.097843    3848 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0815 16:31:41.583963    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:41.583987    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:41.583999    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:41.584008    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:41.587268    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:41.588066    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:41.588074    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:41.588079    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:41.588083    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:41.589716    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:42.083443    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:42.083462    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:42.083471    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:42.083482    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:42.085751    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:42.086179    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:42.086187    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:42.086194    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:42.086197    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:42.087825    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:42.088133    3848 pod_ready.go:103] pod "etcd-ha-138000-m03" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:42.584042    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:42.584070    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:42.584081    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:42.584089    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:42.587530    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:42.588287    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:42.588295    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:42.588301    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:42.588305    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:42.589868    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:43.085149    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:43.085164    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:43.085170    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:43.085174    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:43.087319    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:43.087818    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:43.087825    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:43.087831    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:43.087834    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:43.089562    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:43.583720    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:43.583737    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:43.583744    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:43.583747    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:43.586238    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:43.586831    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:43.586842    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:43.586849    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:43.586852    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:43.589092    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:44.084178    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:44.084189    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:44.084195    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:44.084198    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:44.086364    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:44.086790    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:44.086798    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:44.086805    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:44.086809    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:44.088812    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:44.089107    3848 pod_ready.go:103] pod "etcd-ha-138000-m03" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:44.584718    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:44.584743    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:44.584755    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:44.584763    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:44.587851    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:44.588606    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:44.588615    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:44.588621    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:44.588624    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:44.590403    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:45.083471    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:45.083486    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:45.083492    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:45.083496    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:45.085722    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:45.086170    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:45.086177    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:45.086186    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:45.086189    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:45.087992    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:45.583684    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:45.583761    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:45.583775    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:45.583782    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:45.586696    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:45.587281    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:45.587292    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:45.587300    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:45.587305    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:45.588851    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:46.083567    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:46.083581    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:46.083590    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:46.083595    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:46.086254    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:46.086706    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:46.086714    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:46.086720    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:46.086724    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:46.088505    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:46.583431    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:46.583454    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:46.583474    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:46.583477    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:46.586641    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:46.587367    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:46.587376    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:46.587383    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:46.587389    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:46.590271    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:46.590924    3848 pod_ready.go:103] pod "etcd-ha-138000-m03" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:47.085070    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:47.085088    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:47.085094    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:47.085097    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:47.087411    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:47.087834    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:47.087841    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:47.087847    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:47.087856    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:47.089857    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:47.583460    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:47.583510    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:47.583537    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:47.583547    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:47.586412    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:47.587147    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:47.587155    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:47.587161    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:47.587164    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:47.589077    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:48.084130    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:48.084172    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:48.084180    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:48.084184    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:48.086241    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:48.086700    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:48.086708    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:48.086715    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:48.086719    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:48.088392    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:48.583712    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:48.583726    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:48.583733    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:48.583736    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:48.585950    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:48.586404    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:48.586411    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:48.586417    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:48.586420    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:48.588064    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:49.084795    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:49.084810    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:49.084817    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:49.084821    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:49.087201    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:49.087638    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:49.087646    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:49.087651    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:49.087655    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:49.089294    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:49.089762    3848 pod_ready.go:103] pod "etcd-ha-138000-m03" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:49.584532    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:49.584586    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:49.584596    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:49.584602    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:49.586828    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:49.587368    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:49.587376    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:49.587381    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:49.587386    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:49.589092    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:50.084677    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:50.084702    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:50.084714    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:50.084720    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:50.090233    3848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 16:31:50.091082    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:50.091090    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:50.091095    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:50.091098    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:50.093397    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:50.584557    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:50.584594    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:50.584607    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:50.584614    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:50.587331    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:50.588105    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:50.588113    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:50.588119    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:50.588122    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:50.589783    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:51.084222    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:51.084238    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:51.084245    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:51.084249    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:51.086498    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:51.086853    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:51.086860    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:51.086866    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:51.086869    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:51.088548    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:51.583648    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:51.583662    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:51.583669    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:51.583673    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:51.585837    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:51.586356    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:51.586364    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:51.586370    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:51.586374    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:51.588027    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:51.588324    3848 pod_ready.go:103] pod "etcd-ha-138000-m03" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:52.083439    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:31:52.083464    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.083477    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.083486    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.086839    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:52.087326    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:52.087334    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.087340    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.087344    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.089021    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:52.089421    3848 pod_ready.go:93] pod "etcd-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:52.089431    3848 pod_ready.go:82] duration metric: took 14.506206257s for pod "etcd-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
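	The 14.5s span above is minikube's pod_ready loop: it re-fetches the pod roughly every 500ms and checks the PodReady condition until it reports "True" or the 6m0s budget runs out. A minimal sketch of the same check with client-go (the helper name waitPodReady is illustrative, not minikube's actual function):
	
	-- go sketch --
	package sketch
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)
	
	// waitPodReady polls a pod every 500ms until its PodReady condition is
	// True or the timeout expires. Illustrative only, not minikube's code.
	func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("pod %s/%s not Ready after %v", ns, name, timeout)
	}
	-- /go sketch --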
	I0815 16:31:52.089443    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:52.089476    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000
	I0815 16:31:52.089481    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.089487    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.089490    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.091044    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:52.091506    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:52.091513    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.091519    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.091522    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.093067    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:52.093523    3848 pod_ready.go:93] pod "kube-apiserver-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:52.093534    3848 pod_ready.go:82] duration metric: took 4.083615ms for pod "kube-apiserver-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:52.093540    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:52.093569    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:31:52.093574    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.093579    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.093583    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.096079    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:52.096682    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:52.096689    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.096695    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.096698    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.098629    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:52.099014    3848 pod_ready.go:93] pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:52.099023    3848 pod_ready.go:82] duration metric: took 5.477344ms for pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:52.099030    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:52.099060    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m03
	I0815 16:31:52.099065    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.099071    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.099075    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.100773    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:52.101171    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:52.101178    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.101184    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.101188    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.108504    3848 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0815 16:31:52.599355    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m03
	I0815 16:31:52.599371    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.599378    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.599380    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.603474    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:31:52.603827    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:52.603834    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:52.603839    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:52.603842    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:52.607400    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:53.100426    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m03
	I0815 16:31:53.100452    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:53.100465    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:53.100469    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:53.103591    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:53.103977    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:53.103985    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:53.103991    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:53.103995    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:53.105550    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:53.600030    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m03
	I0815 16:31:53.600056    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:53.600098    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:53.600106    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:53.603820    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:53.604279    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:53.604287    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:53.604292    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:53.604302    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:53.605948    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:54.100215    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m03
	I0815 16:31:54.100240    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.100248    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.100254    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.103639    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:54.104211    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:54.104222    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.104230    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.104236    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.106285    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:54.106596    3848 pod_ready.go:103] pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace has status "Ready":"False"
	I0815 16:31:54.600238    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m03
	I0815 16:31:54.600262    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.600275    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.600280    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.603528    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:54.604248    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:54.604259    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.604268    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.604276    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.606261    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:54.606605    3848 pod_ready.go:93] pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:54.606614    3848 pod_ready.go:82] duration metric: took 2.507587207s for pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:54.606621    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:54.606652    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:31:54.606657    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.606663    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.606677    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.608196    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:54.608645    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:54.608652    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.608658    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.608661    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.610174    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:54.610543    3848 pod_ready.go:93] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:54.610551    3848 pod_ready.go:82] duration metric: took 3.924647ms for pod "kube-controller-manager-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:54.610565    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:54.610597    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000-m02
	I0815 16:31:54.610601    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.610607    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.610611    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.612220    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:54.612637    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:54.612644    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.612648    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.612652    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.614115    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:54.614453    3848 pod_ready.go:93] pod "kube-controller-manager-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:54.614461    3848 pod_ready.go:82] duration metric: took 3.890604ms for pod "kube-controller-manager-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:54.614467    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:54.685393    3848 request.go:632] Waited for 70.886034ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000-m03
	I0815 16:31:54.685542    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000-m03
	I0815 16:31:54.685554    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.685565    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.685572    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.689462    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:54.884047    3848 request.go:632] Waited for 194.079873ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:54.884179    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:54.884194    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:54.884206    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:54.884216    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:54.887378    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:54.887638    3848 pod_ready.go:93] pod "kube-controller-manager-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:54.887648    3848 pod_ready.go:82] duration metric: took 273.176916ms for pod "kube-controller-manager-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
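	The "Waited for ... due to client-side throttling, not priority and fairness" messages come from client-go's token-bucket rate limiter on the client, not from API Priority and Fairness on the server. The limiter is governed by the QPS and Burst fields of the rest.Config; a hedged sketch of raising them (the values shown are examples, not minikube's defaults):
	
	-- go sketch --
	package sketch
	
	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// newClient builds a clientset whose client-side token bucket allows
	// qps sustained requests/sec with bursts up to burst. Example values
	// only; too low a budget produces the throttling waits seen above.
	func newClient(kubeconfig string, qps float32, burst int) (*kubernetes.Clientset, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		cfg.QPS = qps     // e.g. 50
		cfg.Burst = burst // e.g. 100
		return kubernetes.NewForConfig(cfg)
	}
	-- /go sketch --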
	I0815 16:31:54.887655    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cznkn" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:55.084696    3848 request.go:632] Waited for 197.006461ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cznkn
	I0815 16:31:55.084754    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cznkn
	I0815 16:31:55.084760    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:55.084766    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:55.084770    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:55.086486    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:31:55.284932    3848 request.go:632] Waited for 198.019424ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:55.285014    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:55.285023    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:55.285031    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:55.285034    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:55.287587    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:55.288003    3848 pod_ready.go:93] pod "kube-proxy-cznkn" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:55.288012    3848 pod_ready.go:82] duration metric: took 400.352996ms for pod "kube-proxy-cznkn" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:55.288019    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kxghx" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:55.484813    3848 request.go:632] Waited for 196.749045ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kxghx
	I0815 16:31:55.484909    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kxghx
	I0815 16:31:55.484933    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:55.484946    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:55.484952    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:55.487936    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:55.684903    3848 request.go:632] Waited for 196.468256ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:55.684989    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:55.684999    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:55.685010    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:55.685019    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:55.688164    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:55.688606    3848 pod_ready.go:93] pod "kube-proxy-kxghx" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:55.688619    3848 pod_ready.go:82] duration metric: took 400.595564ms for pod "kube-proxy-kxghx" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:55.688628    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qpth7" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:55.884647    3848 request.go:632] Waited for 195.972571ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qpth7
	I0815 16:31:55.884703    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qpth7
	I0815 16:31:55.884734    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:55.884828    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:55.884842    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:55.887780    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:56.085059    3848 request.go:632] Waited for 196.76753ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m04
	I0815 16:31:56.085155    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m04
	I0815 16:31:56.085166    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:56.085178    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:56.085187    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:56.088438    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:56.088843    3848 pod_ready.go:98] node "ha-138000-m04" hosting pod "kube-proxy-qpth7" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-138000-m04" has status "Ready":"Unknown"
	I0815 16:31:56.088858    3848 pod_ready.go:82] duration metric: took 400.224535ms for pod "kube-proxy-qpth7" in "kube-system" namespace to be "Ready" ...
	E0815 16:31:56.088867    3848 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-138000-m04" hosting pod "kube-proxy-qpth7" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-138000-m04" has status "Ready":"Unknown"
	I0815 16:31:56.088873    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tf79g" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:56.284412    3848 request.go:632] Waited for 195.467169ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tf79g
	I0815 16:31:56.284533    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tf79g
	I0815 16:31:56.284544    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:56.284556    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:56.284567    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:56.287997    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:56.483641    3848 request.go:632] Waited for 195.132786ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:56.483717    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:56.483778    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:56.483801    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:56.483810    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:56.486922    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:56.487377    3848 pod_ready.go:93] pod "kube-proxy-tf79g" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:56.487387    3848 pod_ready.go:82] duration metric: took 398.50917ms for pod "kube-proxy-tf79g" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:56.487394    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:56.684509    3848 request.go:632] Waited for 197.075187ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000
	I0815 16:31:56.684584    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000
	I0815 16:31:56.684592    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:56.684600    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:56.684606    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:56.687177    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:56.884267    3848 request.go:632] Waited for 196.705982ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:56.884375    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:31:56.884384    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:56.884392    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:56.884396    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:56.886486    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:56.886846    3848 pod_ready.go:93] pod "kube-scheduler-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:56.886854    3848 pod_ready.go:82] duration metric: took 399.455831ms for pod "kube-scheduler-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:56.886860    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:57.083869    3848 request.go:632] Waited for 196.961301ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m02
	I0815 16:31:57.083950    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m02
	I0815 16:31:57.083960    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:57.083983    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:57.083992    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:57.087081    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:57.285517    3848 request.go:632] Waited for 197.962246ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:57.285639    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:31:57.285649    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:57.285659    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:57.285667    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:57.288947    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:57.289317    3848 pod_ready.go:93] pod "kube-scheduler-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:57.289331    3848 pod_ready.go:82] duration metric: took 402.465658ms for pod "kube-scheduler-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:57.289340    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:57.483919    3848 request.go:632] Waited for 194.531212ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m03
	I0815 16:31:57.484018    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m03
	I0815 16:31:57.484029    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:57.484041    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:57.484049    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:57.486736    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:31:57.683533    3848 request.go:632] Waited for 196.372817ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:57.683619    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:31:57.683630    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:57.683642    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:57.683649    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:57.686767    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:57.687131    3848 pod_ready.go:93] pod "kube-scheduler-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:31:57.687146    3848 pod_ready.go:82] duration metric: took 397.799248ms for pod "kube-scheduler-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:31:57.687155    3848 pod_ready.go:39] duration metric: took 20.138416099s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 16:31:57.687170    3848 api_server.go:52] waiting for apiserver process to appear ...
	I0815 16:31:57.687237    3848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 16:31:57.700597    3848 api_server.go:72] duration metric: took 20.837986375s to wait for apiserver process to appear ...
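	The process check above is just a pgrep run over ssh_runner, with the pattern copied verbatim from the log; a local equivalent with os/exec looks like this (sketch only):
	
	-- go sketch --
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// apiserverRunning reports whether a process whose full command line
	// matches the pattern exists; pgrep exits non-zero when nothing
	// matches. minikube runs the same command inside the VM via SSH.
	func apiserverRunning() bool {
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		return err == nil
	}
	
	func main() { fmt.Println(apiserverRunning()) }
	-- /go sketch --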
	I0815 16:31:57.700610    3848 api_server.go:88] waiting for apiserver healthz status ...
	I0815 16:31:57.700622    3848 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0815 16:31:57.703621    3848 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0815 16:31:57.703653    3848 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0815 16:31:57.703658    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:57.703664    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:57.703670    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:57.704168    3848 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0815 16:31:57.704198    3848 api_server.go:141] control plane version: v1.31.0
	I0815 16:31:57.704207    3848 api_server.go:131] duration metric: took 3.590796ms to wait for apiserver health ...
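	The healthz probe is a plain HTTPS GET that treats a 200 response with body "ok" (as logged above) as healthy. A minimal sketch; skipping certificate verification is a shortcut for illustration, real code should trust the cluster CA:
	
	-- go sketch --
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	// checkHealthz returns nil when GET /healthz answers 200 "ok".
	// InsecureSkipVerify is for the sketch only.
	func checkHealthz(base string) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(base + "/healthz")
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK || string(body) != "ok" {
			return fmt.Errorf("healthz: %d %q", resp.StatusCode, body)
		}
		return nil
	}
	
	func main() { fmt.Println(checkHealthz("https://192.169.0.5:8443")) }
	-- /go sketch --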
	I0815 16:31:57.704213    3848 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 16:31:57.884532    3848 request.go:632] Waited for 180.27549ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:31:57.884634    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:31:57.884645    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:57.884656    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:57.884661    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:57.889257    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:31:57.894492    3848 system_pods.go:59] 26 kube-system pods found
	I0815 16:31:57.894504    3848 system_pods.go:61] "coredns-6f6b679f8f-dmgt5" [47d73953-ec2c-4f17-b2b8-d6a9b5e5a316] Running
	I0815 16:31:57.894508    3848 system_pods.go:61] "coredns-6f6b679f8f-zc8jj" [b4a9df39-b09d-4bc3-97f6-b3176ff8e842] Running
	I0815 16:31:57.894511    3848 system_pods.go:61] "etcd-ha-138000" [c2e7e508-a157-4581-a446-2f48e51c0c16] Running
	I0815 16:31:57.894514    3848 system_pods.go:61] "etcd-ha-138000-m02" [4f371d3f-821b-4eae-ab52-5b75d90acd67] Running
	I0815 16:31:57.894516    3848 system_pods.go:61] "etcd-ha-138000-m03" [ebdd90b3-c3b2-44c2-a411-4f6afa67a455] Running
	I0815 16:31:57.894519    3848 system_pods.go:61] "kindnet-77dc6" [24bfc069-9446-43a7-aa61-308c1a62a20e] Running
	I0815 16:31:57.894522    3848 system_pods.go:61] "kindnet-dsvxt" [7645852b-8535-4cb4-8324-15c9b55176e3] Running
	I0815 16:31:57.894525    3848 system_pods.go:61] "kindnet-m887r" [ba31865b-c712-47a8-9fd8-06420270ac8b] Running
	I0815 16:31:57.894527    3848 system_pods.go:61] "kindnet-z6mnx" [fc6211a0-df99-4350-90ba-a18f74bd1bfc] Running
	I0815 16:31:57.894530    3848 system_pods.go:61] "kube-apiserver-ha-138000" [bbc684f7-d8e2-44f2-987a-df9d05cf54fc] Running
	I0815 16:31:57.894534    3848 system_pods.go:61] "kube-apiserver-ha-138000-m02" [c30e129b-f475-4131-a25e-7eeecb39cbea] Running
	I0815 16:31:57.894537    3848 system_pods.go:61] "kube-apiserver-ha-138000-m03" [90304f02-10f2-446d-b731-674df5602401] Running
	I0815 16:31:57.894541    3848 system_pods.go:61] "kube-controller-manager-ha-138000" [1d4148d1-9798-4662-91de-d9a7dae634e2] Running
	I0815 16:31:57.894545    3848 system_pods.go:61] "kube-controller-manager-ha-138000-m02" [ad693dae-cb0a-46ca-955b-33a9465d911a] Running
	I0815 16:31:57.894547    3848 system_pods.go:61] "kube-controller-manager-ha-138000-m03" [11aa509a-dade-4b66-b157-4d4cccb7f349] Running
	I0815 16:31:57.894550    3848 system_pods.go:61] "kube-proxy-cznkn" [61cd1a9a-80fb-4d0e-a4dd-f5ed2d6ddc0f] Running
	I0815 16:31:57.894553    3848 system_pods.go:61] "kube-proxy-kxghx" [27f61dcc-2812-4038-af2a-aa9f46033869] Running
	I0815 16:31:57.894555    3848 system_pods.go:61] "kube-proxy-qpth7" [a343f80b-0fe9-4c88-9782-5fbf9a6170d1] Running
	I0815 16:31:57.894558    3848 system_pods.go:61] "kube-proxy-tf79g" [ef8a2aeb-55c4-436e-a306-da8b93c9707f] Running
	I0815 16:31:57.894560    3848 system_pods.go:61] "kube-scheduler-ha-138000" [ee1d3eb0-596d-4bf0-bed4-dbd38b286e38] Running
	I0815 16:31:57.894563    3848 system_pods.go:61] "kube-scheduler-ha-138000-m02" [1cc29309-49ad-427e-862a-758cc5711eef] Running
	I0815 16:31:57.894566    3848 system_pods.go:61] "kube-scheduler-ha-138000-m03" [2e3efb3c-6903-4306-adec-d4a47b3a56bd] Running
	I0815 16:31:57.894572    3848 system_pods.go:61] "kube-vip-ha-138000" [1b8c66e5-ffaa-4d4b-a220-8eb664b79d91] Running
	I0815 16:31:57.894575    3848 system_pods.go:61] "kube-vip-ha-138000-m02" [a0fe2ba6-1ace-456f-ab2e-15c43303dcad] Running
	I0815 16:31:57.894578    3848 system_pods.go:61] "kube-vip-ha-138000-m03" [44fe2bd5-bd48-4f86-b38a-7e4b3554174c] Running
	I0815 16:31:57.894581    3848 system_pods.go:61] "storage-provisioner" [35f3a9d7-cb68-4b10-82a2-1bd72d8d1aa6] Running
	I0815 16:31:57.894585    3848 system_pods.go:74] duration metric: took 190.369062ms to wait for pod list to return data ...
	I0815 16:31:57.894590    3848 default_sa.go:34] waiting for default service account to be created ...
	I0815 16:31:58.083903    3848 request.go:632] Waited for 189.255195ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0815 16:31:58.083992    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0815 16:31:58.084004    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:58.084016    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:58.084024    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:58.087624    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:58.087687    3848 default_sa.go:45] found service account: "default"
	I0815 16:31:58.087696    3848 default_sa.go:55] duration metric: took 193.101509ms for default service account to be created ...
	I0815 16:31:58.087703    3848 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 16:31:58.284595    3848 request.go:632] Waited for 196.812141ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:31:58.284716    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:31:58.284728    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:58.284740    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:58.284748    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:58.290177    3848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 16:31:58.295724    3848 system_pods.go:86] 26 kube-system pods found
	I0815 16:31:58.295738    3848 system_pods.go:89] "coredns-6f6b679f8f-dmgt5" [47d73953-ec2c-4f17-b2b8-d6a9b5e5a316] Running
	I0815 16:31:58.295742    3848 system_pods.go:89] "coredns-6f6b679f8f-zc8jj" [b4a9df39-b09d-4bc3-97f6-b3176ff8e842] Running
	I0815 16:31:58.295747    3848 system_pods.go:89] "etcd-ha-138000" [c2e7e508-a157-4581-a446-2f48e51c0c16] Running
	I0815 16:31:58.295759    3848 system_pods.go:89] "etcd-ha-138000-m02" [4f371d3f-821b-4eae-ab52-5b75d90acd67] Running
	I0815 16:31:58.295765    3848 system_pods.go:89] "etcd-ha-138000-m03" [ebdd90b3-c3b2-44c2-a411-4f6afa67a455] Running
	I0815 16:31:58.295768    3848 system_pods.go:89] "kindnet-77dc6" [24bfc069-9446-43a7-aa61-308c1a62a20e] Running
	I0815 16:31:58.295779    3848 system_pods.go:89] "kindnet-dsvxt" [7645852b-8535-4cb4-8324-15c9b55176e3] Running
	I0815 16:31:58.295783    3848 system_pods.go:89] "kindnet-m887r" [ba31865b-c712-47a8-9fd8-06420270ac8b] Running
	I0815 16:31:58.295786    3848 system_pods.go:89] "kindnet-z6mnx" [fc6211a0-df99-4350-90ba-a18f74bd1bfc] Running
	I0815 16:31:58.295789    3848 system_pods.go:89] "kube-apiserver-ha-138000" [bbc684f7-d8e2-44f2-987a-df9d05cf54fc] Running
	I0815 16:31:58.295791    3848 system_pods.go:89] "kube-apiserver-ha-138000-m02" [c30e129b-f475-4131-a25e-7eeecb39cbea] Running
	I0815 16:31:58.295795    3848 system_pods.go:89] "kube-apiserver-ha-138000-m03" [90304f02-10f2-446d-b731-674df5602401] Running
	I0815 16:31:58.295798    3848 system_pods.go:89] "kube-controller-manager-ha-138000" [1d4148d1-9798-4662-91de-d9a7dae634e2] Running
	I0815 16:31:58.295801    3848 system_pods.go:89] "kube-controller-manager-ha-138000-m02" [ad693dae-cb0a-46ca-955b-33a9465d911a] Running
	I0815 16:31:58.295804    3848 system_pods.go:89] "kube-controller-manager-ha-138000-m03" [11aa509a-dade-4b66-b157-4d4cccb7f349] Running
	I0815 16:31:58.295807    3848 system_pods.go:89] "kube-proxy-cznkn" [61cd1a9a-80fb-4d0e-a4dd-f5ed2d6ddc0f] Running
	I0815 16:31:58.295814    3848 system_pods.go:89] "kube-proxy-kxghx" [27f61dcc-2812-4038-af2a-aa9f46033869] Running
	I0815 16:31:58.295818    3848 system_pods.go:89] "kube-proxy-qpth7" [a343f80b-0fe9-4c88-9782-5fbf9a6170d1] Running
	I0815 16:31:58.295821    3848 system_pods.go:89] "kube-proxy-tf79g" [ef8a2aeb-55c4-436e-a306-da8b93c9707f] Running
	I0815 16:31:58.295824    3848 system_pods.go:89] "kube-scheduler-ha-138000" [ee1d3eb0-596d-4bf0-bed4-dbd38b286e38] Running
	I0815 16:31:58.295827    3848 system_pods.go:89] "kube-scheduler-ha-138000-m02" [1cc29309-49ad-427e-862a-758cc5711eef] Running
	I0815 16:31:58.295830    3848 system_pods.go:89] "kube-scheduler-ha-138000-m03" [2e3efb3c-6903-4306-adec-d4a47b3a56bd] Running
	I0815 16:31:58.295833    3848 system_pods.go:89] "kube-vip-ha-138000" [1b8c66e5-ffaa-4d4b-a220-8eb664b79d91] Running
	I0815 16:31:58.295836    3848 system_pods.go:89] "kube-vip-ha-138000-m02" [a0fe2ba6-1ace-456f-ab2e-15c43303dcad] Running
	I0815 16:31:58.295838    3848 system_pods.go:89] "kube-vip-ha-138000-m03" [44fe2bd5-bd48-4f86-b38a-7e4b3554174c] Running
	I0815 16:31:58.295841    3848 system_pods.go:89] "storage-provisioner" [35f3a9d7-cb68-4b10-82a2-1bd72d8d1aa6] Running
	I0815 16:31:58.295845    3848 system_pods.go:126] duration metric: took 208.13908ms to wait for k8s-apps to be running ...
	I0815 16:31:58.295851    3848 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 16:31:58.295902    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 16:31:58.307696    3848 system_svc.go:56] duration metric: took 11.840404ms WaitForService to wait for kubelet
	I0815 16:31:58.307710    3848 kubeadm.go:582] duration metric: took 21.445104276s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 16:31:58.307721    3848 node_conditions.go:102] verifying NodePressure condition ...
	I0815 16:31:58.483467    3848 request.go:632] Waited for 175.699042ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0815 16:31:58.483523    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0815 16:31:58.483531    3848 round_trippers.go:469] Request Headers:
	I0815 16:31:58.483546    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:31:58.483605    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:31:58.487271    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:31:58.488234    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:31:58.488246    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:31:58.488253    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:31:58.488256    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:31:58.488259    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:31:58.488263    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:31:58.488266    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:31:58.488269    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:31:58.488272    3848 node_conditions.go:105] duration metric: took 180.547852ms to run NodePressure ...
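	The NodePressure pass reads each node's capacity straight from the API object; the two fields logged per node above are ephemeral storage and CPU. A roughly equivalent client-go sketch:
	
	-- go sketch --
	package sketch
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)
	
	// printNodeCapacity lists all nodes and prints the capacity fields
	// the log reports. Sketch only.
	func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("%s: ephemeral %s, cpu %s\n", n.Name, storage.String(), cpu.String())
		}
		return nil
	}
	-- /go sketch --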
	I0815 16:31:58.488280    3848 start.go:241] waiting for startup goroutines ...
	I0815 16:31:58.488303    3848 start.go:255] writing updated cluster config ...
	I0815 16:31:58.511626    3848 out.go:201] 
	I0815 16:31:58.532028    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:31:58.532166    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:31:58.553589    3848 out.go:177] * Starting "ha-138000-m04" worker node in "ha-138000" cluster
	I0815 16:31:58.594430    3848 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:31:58.594502    3848 cache.go:56] Caching tarball of preloaded images
	I0815 16:31:58.594676    3848 preload.go:172] Found /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0815 16:31:58.594694    3848 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:31:58.594833    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:31:58.595712    3848 start.go:360] acquireMachinesLock for ha-138000-m04: {Name:mkac8034c8a60617271f2754a03dcaf74fc4fef7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 16:31:58.595816    3848 start.go:364] duration metric: took 79.794µs to acquireMachinesLock for "ha-138000-m04"
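	acquireMachinesLock serializes concurrent driver operations on the same machine directory, which is why the log records how long the acquisition took. minikube uses its own mutex package; a generic sketch of the same effect with an exclusive file lock (assumption: flock is not minikube's actual mechanism):
	
	-- go sketch --
	package sketch
	
	import (
		"fmt"
		"os"
		"syscall"
		"time"
	)
	
	// acquireLock takes an exclusive, non-blocking flock on path, retrying
	// every 500ms until timeout. Generic illustration only.
	func acquireLock(path string, timeout time.Duration) (*os.File, error) {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o644)
		if err != nil {
			return nil, err
		}
		deadline := time.Now().Add(timeout)
		for {
			if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err == nil {
				return f, nil
			}
			if time.Now().After(deadline) {
				f.Close()
				return nil, fmt.Errorf("could not lock %s within %v", path, timeout)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
	-- /go sketch --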
	I0815 16:31:58.595841    3848 start.go:96] Skipping create...Using existing machine configuration
	I0815 16:31:58.595851    3848 fix.go:54] fixHost starting: m04
	I0815 16:31:58.596274    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:31:58.596311    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:31:58.605762    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52311
	I0815 16:31:58.606137    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:31:58.606475    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:31:58.606484    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:31:58.606737    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:31:58.606878    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:31:58.606971    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetState
	I0815 16:31:58.607059    3848 main.go:141] libmachine: (ha-138000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:31:58.607149    3848 main.go:141] libmachine: (ha-138000-m04) DBG | hyperkit pid from json: 3240
	I0815 16:31:58.608054    3848 main.go:141] libmachine: (ha-138000-m04) DBG | hyperkit pid 3240 missing from process table
	I0815 16:31:58.608090    3848 fix.go:112] recreateIfNeeded on ha-138000-m04: state=Stopped err=<nil>
	I0815 16:31:58.608101    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	W0815 16:31:58.608193    3848 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 16:31:58.629670    3848 out.go:177] * Restarting existing hyperkit VM for "ha-138000-m04" ...
	I0815 16:31:58.671397    3848 main.go:141] libmachine: (ha-138000-m04) Calling .Start
	I0815 16:31:58.671607    3848 main.go:141] libmachine: (ha-138000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:31:58.671648    3848 main.go:141] libmachine: (ha-138000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/hyperkit.pid
	I0815 16:31:58.671760    3848 main.go:141] libmachine: (ha-138000-m04) DBG | Using UUID e49817f2-f6c4-46a0-a846-8a8b2da04ea9
	I0815 16:31:58.700620    3848 main.go:141] libmachine: (ha-138000-m04) DBG | Generated MAC 66:d1:6e:6f:24:26
	I0815 16:31:58.700645    3848 main.go:141] libmachine: (ha-138000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000
	I0815 16:31:58.700779    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e49817f2-f6c4-46a0-a846-8a8b2da04ea9", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002ad680)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:31:58.700809    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e49817f2-f6c4-46a0-a846-8a8b2da04ea9", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002ad680)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0815 16:31:58.700889    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "e49817f2-f6c4-46a0-a846-8a8b2da04ea9", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/ha-138000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"}
	I0815 16:31:58.700927    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U e49817f2-f6c4-46a0-a846-8a8b2da04ea9 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/ha-138000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/tty,log=/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/bzimage,/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-138000"
	I0815 16:31:58.700973    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0815 16:31:58.702332    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 DEBUG: hyperkit: Pid is 4201
	I0815 16:31:58.702793    3848 main.go:141] libmachine: (ha-138000-m04) DBG | Attempt 0
	I0815 16:31:58.702829    3848 main.go:141] libmachine: (ha-138000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:31:58.702904    3848 main.go:141] libmachine: (ha-138000-m04) DBG | hyperkit pid from json: 4201
	I0815 16:31:58.703953    3848 main.go:141] libmachine: (ha-138000-m04) DBG | Searching for 66:d1:6e:6f:24:26 in /var/db/dhcpd_leases ...
	I0815 16:31:58.704027    3848 main.go:141] libmachine: (ha-138000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0815 16:31:58.704048    3848 main.go:141] libmachine: (ha-138000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:18:89:2a:2d:99 ID:1,9e:18:89:2a:2d:99 Lease:0x66bfe14f}
	I0815 16:31:58.704066    3848 main.go:141] libmachine: (ha-138000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:c2:e9:d7:1c:58 ID:1,9a:c2:e9:d7:1c:58 Lease:0x66bfe10e}
	I0815 16:31:58.704081    3848 main.go:141] libmachine: (ha-138000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:4d:cd:54:35:15 ID:1,66:4d:cd:54:35:15 Lease:0x66bfe0fb}
	I0815 16:31:58.704095    3848 main.go:141] libmachine: (ha-138000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:66:d1:6e:6f:24:26 ID:1,66:d1:6e:6f:24:26 Lease:0x66be8e15}
	I0815 16:31:58.704105    3848 main.go:141] libmachine: (ha-138000-m04) DBG | Found match: 66:d1:6e:6f:24:26
	I0815 16:31:58.704118    3848 main.go:141] libmachine: (ha-138000-m04) DBG | IP: 192.169.0.8
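
The driver resolves the VM's IP by scanning macOS's /var/db/dhcpd_leases for the entry whose hardware address matches the VM's generated MAC (66:d1:6e:6f:24:26 above). A minimal Go sketch of that lookup, assuming the stock bootpd lease format of key=value pairs inside { ... } blocks (the field names and entry order here are assumptions about that file format, not minikube's actual parser):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// findLeaseIP scans a dhcpd_leases file for the entry whose hw_address
	// ends with the given MAC and returns that entry's ip_address, if any.
	// Assumes ip_address precedes hw_address within each entry, as in the
	// lease files seen on macOS.
	func findLeaseIP(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()

		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case line == "{": // start of a new lease entry
				ip = ""
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// hw_address=1,66:d1:6e:6f:24:26 -- the "1," is the hardware type
				if strings.HasSuffix(line, mac) {
					return ip, nil
				}
			}
		}
		return "", fmt.Errorf("no lease found for %s", mac)
	}

	func main() {
		ip, err := findLeaseIP("/var/db/dhcpd_leases", "66:d1:6e:6f:24:26")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println(ip) // 192.169.0.8 in the run above
	}
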
	I0815 16:31:58.704138    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetConfigRaw
	I0815 16:31:58.704996    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetIP
	I0815 16:31:58.705244    3848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/config.json ...
	I0815 16:31:58.705856    3848 machine.go:93] provisionDockerMachine start ...
	I0815 16:31:58.705869    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:31:58.705978    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:31:58.706098    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:31:58.706206    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:31:58.706333    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:31:58.706439    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:31:58.706614    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:31:58.706786    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0815 16:31:58.706796    3848 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 16:31:58.710462    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0815 16:31:58.720101    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0815 16:31:58.720991    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:31:58.721013    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:31:58.721022    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:31:58.721032    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:31:59.105309    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0815 16:31:59.105335    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0815 16:31:59.220059    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0815 16:31:59.220079    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0815 16:31:59.220089    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0815 16:31:59.220095    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0815 16:31:59.220911    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0815 16:31:59.220942    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:31:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0815 16:32:04.889008    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:32:04 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0815 16:32:04.889030    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:32:04 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0815 16:32:04.889049    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:32:04 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0815 16:32:04.912331    3848 main.go:141] libmachine: (ha-138000-m04) DBG | 2024/08/15 16:32:04 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0815 16:32:33.787060    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 16:32:33.787084    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetMachineName
	I0815 16:32:33.787215    3848 buildroot.go:166] provisioning hostname "ha-138000-m04"
	I0815 16:32:33.787226    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetMachineName
	I0815 16:32:33.787318    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:33.787397    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:33.787483    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:33.787564    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:33.787640    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:33.787765    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:32:33.787937    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0815 16:32:33.787945    3848 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-138000-m04 && echo "ha-138000-m04" | sudo tee /etc/hostname
	I0815 16:32:33.847992    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-138000-m04
	
	I0815 16:32:33.848008    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:33.848137    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:33.848240    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:33.848322    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:33.848426    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:33.848548    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:32:33.848705    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0815 16:32:33.848716    3848 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-138000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-138000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-138000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 16:32:33.904813    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
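
The hostname script above is idempotent: it touches /etc/hosts only when no line already maps to ha-138000-m04, and it prefers rewriting an existing 127.0.1.1 entry over appending a new one. A sketch of how such a command string could be assembled on the Go side (the helper name is illustrative, not minikube's actual function):

	package main

	import "fmt"

	// hostsCmd returns a shell snippet that idempotently maps 127.0.1.1
	// to the given hostname in /etc/hosts, mirroring the command above.
	func hostsCmd(hostname string) string {
		return fmt.Sprintf(`
			if ! grep -xq '.*\s%[1]s' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
				else
					echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
				fi
			fi`, hostname)
	}

	func main() { fmt.Println(hostsCmd("ha-138000-m04")) }
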
	I0815 16:32:33.904838    3848 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19452-977/.minikube CaCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19452-977/.minikube}
	I0815 16:32:33.904848    3848 buildroot.go:174] setting up certificates
	I0815 16:32:33.904853    3848 provision.go:84] configureAuth start
	I0815 16:32:33.904860    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetMachineName
	I0815 16:32:33.904995    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetIP
	I0815 16:32:33.905084    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:33.905176    3848 provision.go:143] copyHostCerts
	I0815 16:32:33.905203    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:32:33.905264    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem, removing ...
	I0815 16:32:33.905280    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem
	I0815 16:32:33.915862    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/ca.pem (1082 bytes)
	I0815 16:32:33.936338    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:32:33.936399    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem, removing ...
	I0815 16:32:33.936405    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem
	I0815 16:32:33.960707    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/cert.pem (1123 bytes)
	I0815 16:32:33.961241    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:32:33.961296    3848 exec_runner.go:144] found /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem, removing ...
	I0815 16:32:33.961303    3848 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem
	I0815 16:32:33.961391    3848 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19452-977/.minikube/key.pem (1679 bytes)
	I0815 16:32:33.961771    3848 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem org=jenkins.ha-138000-m04 san=[127.0.0.1 192.169.0.8 ha-138000-m04 localhost minikube]
	I0815 16:32:34.048242    3848 provision.go:177] copyRemoteCerts
	I0815 16:32:34.048297    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 16:32:34.048312    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:34.048461    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:34.048558    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:34.048644    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:34.048725    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/id_rsa Username:docker}
	I0815 16:32:34.079744    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 16:32:34.079820    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 16:32:34.099832    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 16:32:34.099904    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 16:32:34.119955    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 16:32:34.120035    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 16:32:34.140743    3848 provision.go:87] duration metric: took 235.600662ms to configureAuth
	I0815 16:32:34.140757    3848 buildroot.go:189] setting minikube options for container-runtime
	I0815 16:32:34.140940    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:32:34.140975    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:32:34.141106    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:34.141218    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:34.141307    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:34.141393    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:34.141471    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:34.141580    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:32:34.141705    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0815 16:32:34.141713    3848 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0815 16:32:34.191590    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0815 16:32:34.191604    3848 buildroot.go:70] root file system type: tmpfs
	I0815 16:32:34.191676    3848 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0815 16:32:34.191686    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:34.191824    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:34.191939    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:34.192031    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:34.192133    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:34.192260    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:32:34.192405    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0815 16:32:34.192449    3848 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0815 16:32:34.253544    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	Environment=NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0815 16:32:34.253562    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:34.253696    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:34.253789    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:34.253863    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:34.253953    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:34.254084    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:32:34.254223    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0815 16:32:34.254235    3848 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0815 16:32:35.839568    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0815 16:32:35.839584    3848 machine.go:96] duration metric: took 37.11179722s to provisionDockerMachine
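
The diff || { mv; systemctl } command above only swaps in docker.service.new and restarts Docker when the rendered unit differs from the installed one; an unchanged config costs a single diff instead of a daemon restart. diff also exits non-zero when the target file is missing, which is why this fresh node takes the mv branch and creates the multi-user.target symlink. A sketch of the same idempotent-update idiom (runSSH is a stand-in for minikube's SSH runner, not a real minikube API):

	package main

	import "fmt"

	// updateIfChanged renders the "diff, then swap and restart only on
	// change" pattern from the log as a reusable command builder.
	func updateIfChanged(runSSH func(cmd string) error, unitPath string) error {
		cmd := fmt.Sprintf(
			"sudo diff -u %[1]s %[1]s.new || "+
				"{ sudo mv %[1]s.new %[1]s; sudo systemctl -f daemon-reload && "+
				"sudo systemctl -f enable docker && sudo systemctl -f restart docker; }",
			unitPath)
		return runSSH(cmd)
	}

	func main() {
		_ = updateIfChanged(func(cmd string) error { fmt.Println(cmd); return nil },
			"/lib/systemd/system/docker.service")
	}
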
	I0815 16:32:35.839591    3848 start.go:293] postStartSetup for "ha-138000-m04" (driver="hyperkit")
	I0815 16:32:35.839597    3848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 16:32:35.839606    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:32:35.839797    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 16:32:35.839811    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:35.839906    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:35.839987    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:35.840069    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:35.840139    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/id_rsa Username:docker}
	I0815 16:32:35.872247    3848 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 16:32:35.875358    3848 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 16:32:35.875369    3848 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/addons for local assets ...
	I0815 16:32:35.875469    3848 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19452-977/.minikube/files for local assets ...
	I0815 16:32:35.875649    3848 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> 14982.pem in /etc/ssl/certs
	I0815 16:32:35.875656    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /etc/ssl/certs/14982.pem
	I0815 16:32:35.875856    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 16:32:35.884005    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:32:35.903707    3848 start.go:296] duration metric: took 64.039683ms for postStartSetup
	I0815 16:32:35.903730    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:32:35.903903    3848 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0815 16:32:35.903917    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:35.904012    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:35.904095    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:35.904168    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:35.904243    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/id_rsa Username:docker}
	I0815 16:32:35.936201    3848 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0815 16:32:35.936261    3848 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0815 16:32:35.969821    3848 fix.go:56] duration metric: took 37.351909726s for fixHost
	I0815 16:32:35.969846    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:35.969981    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:35.970066    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:35.970160    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:35.970248    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:35.970357    3848 main.go:141] libmachine: Using SSH client type: native
	I0815 16:32:35.970503    3848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x23b3ea0] 0x23b6c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0815 16:32:35.970511    3848 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 16:32:36.019594    3848 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723764755.882644542
	
	I0815 16:32:36.019607    3848 fix.go:216] guest clock: 1723764755.882644542
	I0815 16:32:36.019612    3848 fix.go:229] Guest: 2024-08-15 16:32:35.882644542 -0700 PDT Remote: 2024-08-15 16:32:35.969836 -0700 PDT m=+161.949888378 (delta=-87.191458ms)
	I0815 16:32:36.019628    3848 fix.go:200] guest clock delta is within tolerance: -87.191458ms
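
The clock check runs date +%s.%N in the guest and compares the result against the host's clock; a delta inside tolerance (-87ms here) means no forced time sync. A sketch of parsing that output and computing the delta (the tolerance value below is illustrative, not minikube's setting):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns `date +%s.%N` output into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, _ := parseGuestClock("1723764755.882644542")
		delta := guest.Sub(time.Now())
		// An illustrative threshold; only deltas beyond it would trigger a sync.
		fmt.Printf("guest clock delta: %v (ok: %v)\n", delta, delta.Abs() < 2*time.Second)
	}
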
	I0815 16:32:36.019633    3848 start.go:83] releasing machines lock for "ha-138000-m04", held for 37.401695552s
	I0815 16:32:36.019652    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:32:36.019780    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetIP
	I0815 16:32:36.042030    3848 out.go:177] * Found network options:
	I0815 16:32:36.062147    3848 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	W0815 16:32:36.083026    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 16:32:36.083070    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 16:32:36.083084    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 16:32:36.083102    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:32:36.083847    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:32:36.084058    3848 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:32:36.084240    3848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 16:32:36.084283    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	W0815 16:32:36.084353    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 16:32:36.084375    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 16:32:36.084394    3848 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 16:32:36.084487    3848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 16:32:36.084508    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:36.084519    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:32:36.084733    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:36.084745    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:32:36.084957    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:36.084992    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:32:36.085156    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:32:36.085189    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/id_rsa Username:docker}
	I0815 16:32:36.085315    3848 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/id_rsa Username:docker}
	W0815 16:32:36.114740    3848 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 16:32:36.114803    3848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 16:32:36.163124    3848 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 16:32:36.163145    3848 start.go:495] detecting cgroup driver to use...
	I0815 16:32:36.163258    3848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:32:36.179534    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0815 16:32:36.187872    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0815 16:32:36.196474    3848 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0815 16:32:36.196528    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0815 16:32:36.204752    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:32:36.212948    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0815 16:32:36.221222    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 16:32:36.229511    3848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 16:32:36.238142    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0815 16:32:36.246643    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0815 16:32:36.254862    3848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0815 16:32:36.263281    3848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 16:32:36.270596    3848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 16:32:36.278325    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:32:36.377803    3848 ssh_runner.go:195] Run: sudo systemctl restart containerd
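
The sed passes above rewrite /etc/containerd/config.toml so containerd's CRI plugin matches the chosen "cgroupfs" driver: the sandbox image is pinned to pause:3.10, SystemdCgroup is forced to false, the legacy v1 runtime shims are mapped to io.containerd.runc.v2, and the CNI conf_dir is pointed at /etc/cni/net.d. After those edits the relevant fragment of the file looks roughly like this (an illustrative excerpt under the standard containerd config layout, not the full generated file):

	[plugins."io.containerd.grpc.v1.cri"]
	  sandbox_image = "registry.k8s.io/pause:3.10"
	  restrict_oom_score_adj = false
	  enable_unprivileged_ports = true
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
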
	I0815 16:32:36.396329    3848 start.go:495] detecting cgroup driver to use...
	I0815 16:32:36.396399    3848 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0815 16:32:36.411192    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:32:36.423875    3848 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 16:32:36.437859    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 16:32:36.449142    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:32:36.460191    3848 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0815 16:32:36.479331    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 16:32:36.491179    3848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 16:32:36.506341    3848 ssh_runner.go:195] Run: which cri-dockerd
	I0815 16:32:36.509156    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0815 16:32:36.517306    3848 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0815 16:32:36.530887    3848 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0815 16:32:36.631226    3848 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0815 16:32:36.742723    3848 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0815 16:32:36.742750    3848 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0815 16:32:36.756569    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:32:36.851332    3848 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0815 16:32:39.062024    3848 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.208594053s)
	I0815 16:32:39.062086    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0815 16:32:39.072858    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:32:39.083135    3848 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0815 16:32:39.180174    3848 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0815 16:32:39.296201    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:32:39.397264    3848 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0815 16:32:39.409768    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 16:32:39.419919    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:32:39.520172    3848 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0815 16:32:39.580712    3848 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0815 16:32:39.580787    3848 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0815 16:32:39.585172    3848 start.go:563] Will wait 60s for crictl version
	I0815 16:32:39.585233    3848 ssh_runner.go:195] Run: which crictl
	I0815 16:32:39.588436    3848 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 16:32:39.616400    3848 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0815 16:32:39.616480    3848 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:32:39.635416    3848 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 16:32:39.674509    3848 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0815 16:32:39.715170    3848 out.go:177]   - env NO_PROXY=192.169.0.5
	I0815 16:32:39.736207    3848 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0815 16:32:39.756990    3848 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	I0815 16:32:39.778125    3848 main.go:141] libmachine: (ha-138000-m04) Calling .GetIP
	I0815 16:32:39.778383    3848 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0815 16:32:39.781735    3848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 16:32:39.792335    3848 mustload.go:65] Loading cluster: ha-138000
	I0815 16:32:39.792518    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:32:39.792754    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:32:39.792777    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:32:39.801573    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52333
	I0815 16:32:39.801892    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:32:39.802227    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:32:39.802235    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:32:39.802431    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:32:39.802539    3848 main.go:141] libmachine: (ha-138000) Calling .GetState
	I0815 16:32:39.802617    3848 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:32:39.802698    3848 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid from json: 3862
	I0815 16:32:39.803669    3848 host.go:66] Checking if "ha-138000" exists ...
	I0815 16:32:39.803925    3848 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:32:39.803948    3848 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:32:39.812411    3848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52335
	I0815 16:32:39.812752    3848 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:32:39.813108    3848 main.go:141] libmachine: Using API Version  1
	I0815 16:32:39.813119    3848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:32:39.813352    3848 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:32:39.813479    3848 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:32:39.813578    3848 certs.go:68] Setting up /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000 for IP: 192.169.0.8
	I0815 16:32:39.813584    3848 certs.go:194] generating shared ca certs ...
	I0815 16:32:39.813595    3848 certs.go:226] acquiring lock for ca certs: {Name:mka1c88019f6064d2983ac988db71a67aeb65696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:32:39.813775    3848 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key
	I0815 16:32:39.813853    3848 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key
	I0815 16:32:39.813863    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 16:32:39.813888    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 16:32:39.813907    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 16:32:39.813924    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 16:32:39.814032    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem (1338 bytes)
	W0815 16:32:39.814088    3848 certs.go:480] ignoring /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498_empty.pem, impossibly tiny 0 bytes
	I0815 16:32:39.814098    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 16:32:39.814142    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/ca.pem (1082 bytes)
	I0815 16:32:39.814184    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/cert.pem (1123 bytes)
	I0815 16:32:39.814213    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/key.pem (1679 bytes)
	I0815 16:32:39.814289    3848 certs.go:484] found cert: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem (1708 bytes)
	I0815 16:32:39.814324    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:32:39.814344    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem -> /usr/share/ca-certificates/1498.pem
	I0815 16:32:39.814362    3848 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem -> /usr/share/ca-certificates/14982.pem
	I0815 16:32:39.814393    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 16:32:39.834330    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 16:32:39.854069    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 16:32:39.873582    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 16:32:39.893143    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 16:32:39.912645    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/certs/1498.pem --> /usr/share/ca-certificates/1498.pem (1338 bytes)
	I0815 16:32:39.932104    3848 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/ssl/certs/14982.pem --> /usr/share/ca-certificates/14982.pem (1708 bytes)
	I0815 16:32:39.951872    3848 ssh_runner.go:195] Run: openssl version
	I0815 16:32:39.956296    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 16:32:39.966055    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:32:39.970287    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:05 /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:32:39.970366    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 16:32:39.974984    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 16:32:39.984513    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1498.pem && ln -fs /usr/share/ca-certificates/1498.pem /etc/ssl/certs/1498.pem"
	I0815 16:32:39.994098    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1498.pem
	I0815 16:32:39.997571    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:14 /usr/share/ca-certificates/1498.pem
	I0815 16:32:39.997641    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1498.pem
	I0815 16:32:40.002092    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1498.pem /etc/ssl/certs/51391683.0"
	I0815 16:32:40.011802    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14982.pem && ln -fs /usr/share/ca-certificates/14982.pem /etc/ssl/certs/14982.pem"
	I0815 16:32:40.021159    3848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14982.pem
	I0815 16:32:40.024904    3848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:14 /usr/share/ca-certificates/14982.pem
	I0815 16:32:40.024948    3848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14982.pem
	I0815 16:32:40.029236    3848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14982.pem /etc/ssl/certs/3ec20f2e.0"
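
Each openssl x509 -hash -noout / ln -fs pair above installs a cert where OpenSSL's lookup-by-subject-hash expects it: the link name is the 8-hex-digit subject hash plus a ".0" suffix (b5213941.0 for minikubeCA in this run). A sketch of that rehash step (helper names are illustrative):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// subjectHash shells out to openssl for a cert's subject hash, the
	// same value the log's `openssl x509 -hash -noout` calls print.
	func subjectHash(certPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	// installCALink links certPath into dir under <subject-hash>.0 so
	// OpenSSL can find it during chain verification.
	func installCALink(certPath, dir string) error {
		h, err := subjectHash(certPath)
		if err != nil {
			return err
		}
		link := fmt.Sprintf("%s/%s.0", dir, h)
		os.Remove(link) // emulate ln -f: replace any stale link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := installCALink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
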
	I0815 16:32:40.038952    3848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 16:32:40.042186    3848 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 16:32:40.042220    3848 kubeadm.go:934] updating node {m04 192.169.0.8 0 v1.31.0 docker false true} ...
	I0815 16:32:40.042279    3848 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-138000-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-138000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 16:32:40.042327    3848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 16:32:40.050823    3848 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 16:32:40.050877    3848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0815 16:32:40.059254    3848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0815 16:32:40.072800    3848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 16:32:40.086506    3848 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0815 16:32:40.089484    3848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 16:32:40.099835    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:32:40.204428    3848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:32:40.219160    3848 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0815 16:32:40.219362    3848 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:32:40.240563    3848 out.go:177] * Verifying Kubernetes components...
	I0815 16:32:40.281239    3848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 16:32:40.407726    3848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 16:32:40.424517    3848 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:32:40.424746    3848 kapi.go:59] client config for ha-138000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/profiles/ha-138000/client.key", CAFile:"/Users/jenkins/minikube-integration/19452-977/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x3a6cf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0815 16:32:40.424790    3848 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0815 16:32:40.424946    3848 node_ready.go:35] waiting up to 6m0s for node "ha-138000-m04" to be "Ready" ...
	I0815 16:32:40.424985    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m04
	I0815 16:32:40.424990    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.424997    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.425001    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.429695    3848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 16:32:40.925699    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m04
	I0815 16:32:40.925718    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.925730    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.925735    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.928643    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:40.929158    3848 node_ready.go:49] node "ha-138000-m04" has status "Ready":"True"
	I0815 16:32:40.929170    3848 node_ready.go:38] duration metric: took 503.811986ms for node "ha-138000-m04" to be "Ready" ...
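
The round_trippers lines show the readiness wait as raw API traffic: a GET of /api/v1/nodes/ha-138000-m04 roughly every 500ms until the node reports Ready. The equivalent check with client-go, reading the NodeReady condition from the node's status (a sketch; the kubeconfig path is the one from this run):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the node's NodeReady condition is True.
	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19452-977/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for { // poll roughly as in the log: one GET every 500ms
			n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-138000-m04", metav1.GetOptions{})
			if err == nil && nodeReady(n) {
				fmt.Println("node Ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
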
	I0815 16:32:40.929177    3848 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 16:32:40.929232    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0815 16:32:40.929240    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.929248    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.929253    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.932889    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:40.938534    3848 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-dmgt5" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:40.938586    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-dmgt5
	I0815 16:32:40.938591    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.938597    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.938601    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.940630    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:40.941135    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:40.941143    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.941149    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.941155    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.943092    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:32:40.943437    3848 pod_ready.go:93] pod "coredns-6f6b679f8f-dmgt5" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:40.943446    3848 pod_ready.go:82] duration metric: took 4.897461ms for pod "coredns-6f6b679f8f-dmgt5" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:40.943453    3848 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-zc8jj" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:40.943484    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-zc8jj
	I0815 16:32:40.943489    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.943495    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.943498    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.945206    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:32:40.945690    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:40.945697    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.945703    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.945706    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.947257    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:32:40.947557    3848 pod_ready.go:93] pod "coredns-6f6b679f8f-zc8jj" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:40.947566    3848 pod_ready.go:82] duration metric: took 4.10464ms for pod "coredns-6f6b679f8f-zc8jj" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:40.947580    3848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:40.947611    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000
	I0815 16:32:40.947616    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.947622    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.947625    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.949227    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:32:40.949563    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:40.949570    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.949576    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.949579    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.951175    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:32:40.951528    3848 pod_ready.go:93] pod "etcd-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:40.951537    3848 pod_ready.go:82] duration metric: took 3.9487ms for pod "etcd-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:40.951543    3848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:40.951576    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m02
	I0815 16:32:40.951581    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.951587    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.951590    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.953480    3848 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 16:32:40.953888    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:32:40.953896    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:40.953902    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:40.953906    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:40.956234    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:40.956704    3848 pod_ready.go:93] pod "etcd-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:40.956713    3848 pod_ready.go:82] duration metric: took 5.161406ms for pod "etcd-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:40.956719    3848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:41.126239    3848 request.go:632] Waited for 169.295221ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:32:41.126310    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-138000-m03
	I0815 16:32:41.126326    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:41.126342    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:41.126348    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:41.129984    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:41.327227    3848 request.go:632] Waited for 196.482674ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:41.327282    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:41.327327    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:41.327340    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:41.327346    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:41.330300    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:41.330659    3848 pod_ready.go:93] pod "etcd-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:41.330669    3848 pod_ready.go:82] duration metric: took 373.660924ms for pod "etcd-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
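
The "Waited for ... due to client-side throttling" entries here and below are emitted by client-go's request.go when its local token-bucket rate limiter delays a request before sending it; as the message itself notes, this is client-side throttling, not server-side API Priority and Fairness. A minimal sketch of that limiter; the QPS and burst values are illustrative, not minikube's actual settings:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/util/flowcontrol"
    )

    func main() {
        // Token bucket: ~5 requests/second sustained, bursts of up to 10.
        // Illustrative values; minikube's client may be configured differently.
        limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)

        for i := 0; i < 15; i++ {
            start := time.Now()
            limiter.Accept() // blocks until a token is available
            if wait := time.Since(start); wait > time.Millisecond {
                // client-go reports exactly this kind of delay as
                // "Waited for ... due to client-side throttling".
                fmt.Printf("request %d throttled for %v\n", i, wait)
            }
        }
    }
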
	I0815 16:32:41.330681    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:41.526448    3848 request.go:632] Waited for 195.583591ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000
	I0815 16:32:41.526543    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000
	I0815 16:32:41.526554    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:41.526567    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:41.526577    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:41.532016    3848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 16:32:41.726373    3848 request.go:632] Waited for 193.637616ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:41.726406    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:41.726411    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:41.726417    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:41.726421    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:41.728634    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:41.729100    3848 pod_ready.go:93] pod "kube-apiserver-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:41.729111    3848 pod_ready.go:82] duration metric: took 398.123683ms for pod "kube-apiserver-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:41.729118    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:41.926911    3848 request.go:632] Waited for 197.603818ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:32:41.927000    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m02
	I0815 16:32:41.927007    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:41.927013    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:41.927017    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:41.929844    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:42.128208    3848 request.go:632] Waited for 197.600405ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:32:42.128281    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:32:42.128287    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:42.128294    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:42.128297    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:42.130511    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:42.130893    3848 pod_ready.go:93] pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:42.130903    3848 pod_ready.go:82] duration metric: took 401.488989ms for pod "kube-apiserver-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:42.130910    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:42.326992    3848 request.go:632] Waited for 195.89771ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m03
	I0815 16:32:42.327104    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-138000-m03
	I0815 16:32:42.327117    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:42.327128    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:42.327133    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:42.330012    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:42.528721    3848 request.go:632] Waited for 197.972621ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:42.528810    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:42.528823    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:42.528832    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:42.528839    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:42.531660    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:42.532014    3848 pod_ready.go:93] pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:42.532023    3848 pod_ready.go:82] duration metric: took 400.824225ms for pod "kube-apiserver-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:42.532031    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:42.728571    3848 request.go:632] Waited for 196.361424ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:32:42.728605    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000
	I0815 16:32:42.728614    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:42.728647    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:42.728651    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:42.731003    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:42.928382    3848 request.go:632] Waited for 196.815945ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:42.928456    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:42.928464    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:42.928472    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:42.928479    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:42.930971    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:42.931316    3848 pod_ready.go:93] pod "kube-controller-manager-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:42.931325    3848 pod_ready.go:82] duration metric: took 399.007322ms for pod "kube-controller-manager-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:42.931332    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:43.127763    3848 request.go:632] Waited for 196.250954ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000-m02
	I0815 16:32:43.127817    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000-m02
	I0815 16:32:43.127830    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:43.127894    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:43.127907    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:43.131065    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:43.327999    3848 request.go:632] Waited for 196.235394ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:32:43.328052    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:32:43.328063    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:43.328073    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:43.328081    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:43.331302    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:43.331997    3848 pod_ready.go:93] pod "kube-controller-manager-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:43.332007    3848 pod_ready.go:82] duration metric: took 400.403262ms for pod "kube-controller-manager-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:43.332014    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:43.527716    3848 request.go:632] Waited for 195.527377ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000-m03
	I0815 16:32:43.527817    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-138000-m03
	I0815 16:32:43.527829    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:43.527841    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:43.527847    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:43.530965    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:43.728236    3848 request.go:632] Waited for 196.484633ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:43.728298    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:43.728309    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:43.728320    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:43.728328    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:43.731883    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:43.732469    3848 pod_ready.go:93] pod "kube-controller-manager-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:43.732478    3848 pod_ready.go:82] duration metric: took 400.192656ms for pod "kube-controller-manager-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:43.732484    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cznkn" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:43.928265    3848 request.go:632] Waited for 195.61986ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cznkn
	I0815 16:32:43.928325    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cznkn
	I0815 16:32:43.928331    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:43.928337    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:43.928341    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:43.930546    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:44.128606    3848 request.go:632] Waited for 197.39717ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:44.128669    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:44.128682    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:44.128693    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:44.128702    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:44.132274    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:44.132835    3848 pod_ready.go:93] pod "kube-proxy-cznkn" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:44.132847    3848 pod_ready.go:82] duration metric: took 400.10235ms for pod "kube-proxy-cznkn" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:44.132856    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kxghx" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:44.328927    3848 request.go:632] Waited for 195.898781ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kxghx
	I0815 16:32:44.328980    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kxghx
	I0815 16:32:44.328988    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:44.328997    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:44.329003    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:44.332425    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:44.528721    3848 request.go:632] Waited for 195.542417ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:44.528856    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:44.528867    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:44.528878    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:44.528884    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:44.532391    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:44.532921    3848 pod_ready.go:93] pod "kube-proxy-kxghx" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:44.532933    3848 pod_ready.go:82] duration metric: took 399.821933ms for pod "kube-proxy-kxghx" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:44.532943    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qpth7" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:44.729675    3848 request.go:632] Waited for 196.549445ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qpth7
	I0815 16:32:44.729804    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qpth7
	I0815 16:32:44.729823    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:44.729835    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:44.729845    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:44.733406    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:44.929790    3848 request.go:632] Waited for 195.811353ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m04
	I0815 16:32:44.929844    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m04
	I0815 16:32:44.929899    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:44.929913    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:44.929919    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:44.933124    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:44.933608    3848 pod_ready.go:93] pod "kube-proxy-qpth7" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:44.933620    3848 pod_ready.go:82] duration metric: took 400.423483ms for pod "kube-proxy-qpth7" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:44.933628    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tf79g" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:45.129188    3848 request.go:632] Waited for 195.397689ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tf79g
	I0815 16:32:45.129249    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tf79g
	I0815 16:32:45.129265    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:45.129278    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:45.129288    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:45.132523    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:45.329740    3848 request.go:632] Waited for 196.543831ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:32:45.329842    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:32:45.329853    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:45.329864    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:45.329893    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:45.332959    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:45.333655    3848 pod_ready.go:93] pod "kube-proxy-tf79g" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:45.333668    3848 pod_ready.go:82] duration metric: took 399.799233ms for pod "kube-proxy-tf79g" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:45.333677    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:45.528959    3848 request.go:632] Waited for 195.085989ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000
	I0815 16:32:45.528999    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000
	I0815 16:32:45.529004    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:45.529011    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:45.529014    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:45.531204    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:45.730380    3848 request.go:632] Waited for 198.71096ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:45.730470    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000
	I0815 16:32:45.730488    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:45.730540    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:45.730549    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:45.733632    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:45.734206    3848 pod_ready.go:93] pod "kube-scheduler-ha-138000" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:45.734218    3848 pod_ready.go:82] duration metric: took 400.300105ms for pod "kube-scheduler-ha-138000" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:45.734227    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:45.929618    3848 request.go:632] Waited for 195.186999ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m02
	I0815 16:32:45.929667    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m02
	I0815 16:32:45.929676    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:45.929687    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:45.929695    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:45.933262    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:46.130161    3848 request.go:632] Waited for 196.149607ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:32:46.130227    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m02
	I0815 16:32:46.130233    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:46.130239    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:46.130243    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:46.132556    3848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 16:32:46.132872    3848 pod_ready.go:93] pod "kube-scheduler-ha-138000-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:46.132882    3848 pod_ready.go:82] duration metric: took 398.424946ms for pod "kube-scheduler-ha-138000-m02" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:46.132892    3848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:46.330062    3848 request.go:632] Waited for 196.982598ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m03
	I0815 16:32:46.330155    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-138000-m03
	I0815 16:32:46.330165    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:46.330189    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:46.330198    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:46.333748    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:46.529626    3848 request.go:632] Waited for 195.297916ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:46.529687    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-138000-m03
	I0815 16:32:46.529698    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:46.529709    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:46.529716    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:46.532896    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:46.533425    3848 pod_ready.go:93] pod "kube-scheduler-ha-138000-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 16:32:46.533437    3848 pod_ready.go:82] duration metric: took 400.316472ms for pod "kube-scheduler-ha-138000-m03" in "kube-system" namespace to be "Ready" ...
	I0815 16:32:46.533445    3848 pod_ready.go:39] duration metric: took 5.600601602s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
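
The readiness loop that just completed issues, for each pod, a GET on the pod followed by a GET on its node, and moves on once the pod's Ready condition is True. A client-go sketch of the pod half of that check; the pod name and the 6-minute budget are taken from the log, while the polling helper and kubeconfig path are assumptions, not minikube's own pod_ready code:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True,
    // the same condition pod_ready.go checks in the log above.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Poll for up to 6 minutes, the same budget the log's waits use.
        err = wait.PollUntilContextTimeout(context.Background(), 200*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-ha-138000", metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat errors as "not ready yet" and keep polling
                }
                return isPodReady(pod), nil
            })
        fmt.Println("ready:", err == nil)
    }
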
	I0815 16:32:46.533458    3848 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 16:32:46.533512    3848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 16:32:46.545338    3848 system_svc.go:56] duration metric: took 11.868784ms WaitForService to wait for kubelet
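
The kubelet probe above is a plain exit-status check: systemctl's --quiet flag suppresses all output, so `is-active` answers entirely through its exit code, 0 only when the unit is active. A rough local equivalent of the command minikube runs through ssh_runner:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Mirrors the ssh_runner command in the log; --quiet suppresses output,
        // so the exit status carries the whole answer (0 means "active").
        cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
        fmt.Println("kubelet active:", cmd.Run() == nil)
    }
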
	I0815 16:32:46.545353    3848 kubeadm.go:582] duration metric: took 6.321930293s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 16:32:46.545367    3848 node_conditions.go:102] verifying NodePressure condition ...
	I0815 16:32:46.729678    3848 request.go:632] Waited for 184.161888ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0815 16:32:46.729775    3848 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0815 16:32:46.729791    3848 round_trippers.go:469] Request Headers:
	I0815 16:32:46.729803    3848 round_trippers.go:473]     Accept: application/json, */*
	I0815 16:32:46.729814    3848 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0815 16:32:46.733356    3848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 16:32:46.734408    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:32:46.734417    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:32:46.734438    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:32:46.734446    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:32:46.734451    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:32:46.734454    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:32:46.734459    3848 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 16:32:46.734463    3848 node_conditions.go:123] node cpu capacity is 2
	I0815 16:32:46.734466    3848 node_conditions.go:105] duration metric: took 188.991963ms to run NodePressure ...
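
The NodePressure pass lists all nodes in a single request and reads capacity out of each node's status; the four "ephemeral capacity / cpu capacity" pairs above correspond to the four nodes of the ha-138000 cluster. A minimal sketch of the same read, assuming the default kubeconfig path:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // The same two fields the log prints for each node.
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n",
                n.Name, n.Status.Capacity.StorageEphemeral(), n.Status.Capacity.Cpu())
        }
    }
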
	I0815 16:32:46.734473    3848 start.go:241] waiting for startup goroutines ...
	I0815 16:32:46.734487    3848 start.go:255] writing updated cluster config ...
	I0815 16:32:46.734849    3848 ssh_runner.go:195] Run: rm -f paused
	I0815 16:32:46.777324    3848 start.go:600] kubectl: 1.29.2, cluster: 1.31.0 (minor skew: 2)
	I0815 16:32:46.799308    3848 out.go:201] 
	W0815 16:32:46.820067    3848 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0.
	I0815 16:32:46.840863    3848 out.go:177]   - Want kubectl v1.31.0? Try 'minikube kubectl -- get pods -A'
	I0815 16:32:46.862128    3848 out.go:177] * Done! kubectl is now configured to use "ha-138000" cluster and "default" namespace by default
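
The kubectl warning printed above comes from a minor-version comparison: client 1.29 against cluster 1.31 is a skew of two minors, one more than kubectl's supported window of one minor in either direction. A toy version of that comparison, assuming well-formed "major.minor.patch" strings:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew returns the absolute difference between the minor version
    // components of two "major.minor.patch" strings.
    func minorSkew(client, server string) int {
        minor := func(v string) int {
            m, _ := strconv.Atoi(strings.Split(strings.TrimPrefix(v, "v"), ".")[1])
            return m
        }
        d := minor(client) - minor(server)
        if d < 0 {
            d = -d
        }
        return d
    }

    func main() {
        if skew := minorSkew("1.29.2", "1.31.0"); skew > 1 {
            fmt.Printf("minor skew %d: kubectl may have incompatibilities\n", skew) // skew is 2 here
        }
    }
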
	
	
	==> Docker <==
	Aug 15 23:30:58 ha-138000 dockerd[1153]: time="2024-08-15T23:30:58.911495531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:30:58 ha-138000 dockerd[1153]: time="2024-08-15T23:30:58.913627850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 15 23:30:58 ha-138000 dockerd[1153]: time="2024-08-15T23:30:58.913666039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 15 23:30:58 ha-138000 dockerd[1153]: time="2024-08-15T23:30:58.913677629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:30:58 ha-138000 dockerd[1153]: time="2024-08-15T23:30:58.913771765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:30:58 ha-138000 dockerd[1153]: time="2024-08-15T23:30:58.917066694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 15 23:30:58 ha-138000 dockerd[1153]: time="2024-08-15T23:30:58.917195390Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 15 23:30:58 ha-138000 dockerd[1153]: time="2024-08-15T23:30:58.917208298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:30:58 ha-138000 dockerd[1153]: time="2024-08-15T23:30:58.917385910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:30:59 ha-138000 dockerd[1153]: time="2024-08-15T23:30:59.886428053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 15 23:30:59 ha-138000 dockerd[1153]: time="2024-08-15T23:30:59.886532806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 15 23:30:59 ha-138000 dockerd[1153]: time="2024-08-15T23:30:59.886546833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:30:59 ha-138000 dockerd[1153]: time="2024-08-15T23:30:59.886748891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:31:00 ha-138000 dockerd[1153]: time="2024-08-15T23:31:00.892633352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 15 23:31:00 ha-138000 dockerd[1153]: time="2024-08-15T23:31:00.893116347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 15 23:31:00 ha-138000 dockerd[1153]: time="2024-08-15T23:31:00.893221469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:31:00 ha-138000 dockerd[1153]: time="2024-08-15T23:31:00.893411350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:31:01 ha-138000 dockerd[1153]: time="2024-08-15T23:31:01.876748430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 15 23:31:01 ha-138000 dockerd[1153]: time="2024-08-15T23:31:01.876814366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 15 23:31:01 ha-138000 dockerd[1153]: time="2024-08-15T23:31:01.876834716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:31:01 ha-138000 dockerd[1153]: time="2024-08-15T23:31:01.876961405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:31:03 ha-138000 dockerd[1153]: time="2024-08-15T23:31:03.874516614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 15 23:31:03 ha-138000 dockerd[1153]: time="2024-08-15T23:31:03.874614005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 15 23:31:03 ha-138000 dockerd[1153]: time="2024-08-15T23:31:03.874643416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 23:31:03 ha-138000 dockerd[1153]: time="2024-08-15T23:31:03.874757663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f4a0ec142726f       045733566833c                                                                                         3 minutes ago       Running             kube-controller-manager   7                   787273cdcffa4       kube-controller-manager-ha-138000
	9b4d9e684266a       cbb01a7bd410d                                                                                         3 minutes ago       Running             coredns                   1                   e616bc4c74358       coredns-6f6b679f8f-dmgt5
	80f5762ff7596       8c811b4aec35f                                                                                         3 minutes ago       Running             busybox                   1                   67d12a31b7b49       busybox-7dff88458-wgww9
	fea7f52d9a276       6e38f40d628db                                                                                         3 minutes ago       Running             storage-provisioner       1                   b65d03e28df57       storage-provisioner
	a06770ea62d50       cbb01a7bd410d                                                                                         3 minutes ago       Running             coredns                   1                   730316cfbee9c       coredns-6f6b679f8f-zc8jj
	3102e608c7d69       ad83b2ca7b09e                                                                                         3 minutes ago       Running             kube-proxy                1                   824e79b38bfeb       kube-proxy-cznkn
	d35ee43272703       12968670680f4                                                                                         3 minutes ago       Running             kindnet-cni               1                   28b2ff94764c2       kindnet-77dc6
	67b207257b40d       2e96e5913fc06                                                                                         4 minutes ago       Running             etcd                      3                   5fbdeb5e7a6b9       etcd-ha-138000
	c2ddb52a9846f       1766f54c897f0                                                                                         4 minutes ago       Running             kube-scheduler            2                   d5e3465359549       kube-scheduler-ha-138000
	2d2c6da6f7b74       38af8ddebf499                                                                                         4 minutes ago       Running             kube-vip                  1                   2bb58ad8c8f10       kube-vip-ha-138000
	2ed9ae0427266       045733566833c                                                                                         4 minutes ago       Exited              kube-controller-manager   6                   787273cdcffa4       kube-controller-manager-ha-138000
	a6baf6e21d6c9       604f5db92eaa8                                                                                         4 minutes ago       Running             kube-apiserver            6                   0de6d71d60938       kube-apiserver-ha-138000
	5ed11c46e0eb7       604f5db92eaa8                                                                                         4 minutes ago       Exited              kube-apiserver            5                   7152268f8eec4       kube-apiserver-ha-138000
	59dac0b44544a       2e96e5913fc06                                                                                         5 minutes ago       Exited              etcd                      2                   ec285d4826baa       etcd-ha-138000
	efbc09be8eda5       38af8ddebf499                                                                                         9 minutes ago       Exited              kube-vip                  0                   0c665afd15e6f       kube-vip-ha-138000
	ac6935271595c       1766f54c897f0                                                                                         9 minutes ago       Exited              kube-scheduler            1                   07c1c62e41d3a       kube-scheduler-ha-138000
	8f20284cd3969       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Exited              busybox                   0                   bfc975a528b9e       busybox-7dff88458-wgww9
	42f5d82b00417       cbb01a7bd410d                                                                                         14 minutes ago      Exited              coredns                   0                   10891f8fbffcc       coredns-6f6b679f8f-dmgt5
	3e8b806ef4f33       cbb01a7bd410d                                                                                         14 minutes ago      Exited              coredns                   0                   096ab15603b01       coredns-6f6b679f8f-zc8jj
	6a1122913bb18       6e38f40d628db                                                                                         14 minutes ago      Exited              storage-provisioner       0                   e30dde4a5a10d       storage-provisioner
	c2a16126718b3       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              14 minutes ago      Exited              kindnet-cni               0                   e260a94a203af       kindnet-77dc6
	fc2e141007efb       ad83b2ca7b09e                                                                                         14 minutes ago      Exited              kube-proxy                0                   5b40cdd6b2c24       kube-proxy-cznkn
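
The container status table is assembled from the runtime's container listing (minikube reaches it through the CRI). A rough equivalent against the Docker Engine API, sketched with the Docker Go SDK; the option and field names follow recent SDK releases and are an assumption, not the code minikube runs:

    package main

    import (
        "context"
        "fmt"

        "github.com/docker/docker/api/types/container"
        "github.com/docker/docker/client"
    )

    func main() {
        cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
        if err != nil {
            panic(err)
        }
        defer cli.Close()

        // All containers, running and exited, like the table above.
        list, err := cli.ContainerList(context.Background(), container.ListOptions{All: true})
        if err != nil {
            panic(err)
        }
        for _, c := range list {
            fmt.Printf("%.12s  %-10s %s\n", c.ID, c.State, c.Image)
        }
    }
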
	
	
	==> coredns [3e8b806ef4f3] <==
	[INFO] 10.244.2.2:44773 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000075522s
	[INFO] 10.244.2.2:53805 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098349s
	[INFO] 10.244.2.2:34369 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000122495s
	[INFO] 10.244.0.4:59671 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000077646s
	[INFO] 10.244.0.4:41185 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000079139s
	[INFO] 10.244.0.4:42405 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000092065s
	[INFO] 10.244.0.4:54373 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000049998s
	[INFO] 10.244.0.4:57169 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000050383s
	[INFO] 10.244.0.4:37825 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085108s
	[INFO] 10.244.1.2:59685 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000072268s
	[INFO] 10.244.1.2:32923 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073054s
	[INFO] 10.244.2.2:50876 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000068102s
	[INFO] 10.244.2.2:54719 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000762s
	[INFO] 10.244.0.4:57395 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000091608s
	[INFO] 10.244.0.4:37936 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000031052s
	[INFO] 10.244.1.2:58408 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000088888s
	[INFO] 10.244.1.2:42731 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000114857s
	[INFO] 10.244.1.2:41638 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000082664s
	[INFO] 10.244.2.2:52666 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092331s
	[INFO] 10.244.2.2:41501 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000093116s
	[INFO] 10.244.0.4:48200 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000075447s
	[INFO] 10.244.0.4:35056 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000091854s
	[INFO] 10.244.0.4:36257 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000057922s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
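
The query pattern in this block, NXDOMAIN for "kubernetes.default" and "kubernetes.default.default.svc.cluster.local" but NOERROR for "kubernetes.default.svc.cluster.local", is resolv.conf search-list expansion at work: with the usual in-cluster ndots:5, a short name is tried against each search suffix before being tried as-is. A sketch of that expansion; the suffix list and ndots value mirror a typical cluster resolv.conf and are assumptions here:

    package main

    import (
        "fmt"
        "strings"
    )

    // expand mimics resolv.conf search-list handling: a name with fewer dots
    // than ndots is tried with each search suffix before being tried as-is.
    func expand(name string, search []string, ndots int) []string {
        var tries []string
        if strings.Count(name, ".") < ndots {
            for _, s := range search {
                tries = append(tries, name+"."+s)
            }
        }
        return append(tries, name)
    }

    func main() {
        search := []string{"default.svc.cluster.local", "svc.cluster.local", "cluster.local"}
        for _, q := range expand("kubernetes.default", search, 5) {
            fmt.Println(q)
        }
    }

The first candidate printed is exactly the doubled "kubernetes.default.default.svc.cluster.local" name that shows up as NXDOMAIN above; only the expansion against "svc.cluster.local" resolves.
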
	
	
	==> coredns [42f5d82b0041] <==
	[INFO] 10.244.1.2:50104 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.009876264s
	[INFO] 10.244.0.4:33653 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115506s
	[INFO] 10.244.0.4:45180 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000042438s
	[INFO] 10.244.1.2:60312 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000068925s
	[INFO] 10.244.1.2:38521 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000124425s
	[INFO] 10.244.1.2:51675 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125646s
	[INFO] 10.244.1.2:33974 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000078827s
	[INFO] 10.244.2.2:38966 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000078816s
	[INFO] 10.244.2.2:56056 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000620092s
	[INFO] 10.244.2.2:32787 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000109221s
	[INFO] 10.244.2.2:55701 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000039601s
	[INFO] 10.244.0.4:52543 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000083971s
	[INFO] 10.244.0.4:55050 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000146353s
	[INFO] 10.244.1.2:52165 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100415s
	[INFO] 10.244.1.2:41123 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060755s
	[INFO] 10.244.2.2:56460 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087503s
	[INFO] 10.244.2.2:36407 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009778s
	[INFO] 10.244.0.4:40764 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000037536s
	[INFO] 10.244.0.4:58473 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000029335s
	[INFO] 10.244.1.2:38640 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000118481s
	[INFO] 10.244.2.2:46151 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000117088s
	[INFO] 10.244.2.2:34054 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000108858s
	[INFO] 10.244.0.4:56735 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000069666s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9b4d9e684266] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35767 - 22561 "HINFO IN 7004530829965964013.1750022571380345519. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015451267s
	
	
	==> coredns [a06770ea62d5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45363 - 12851 "HINFO IN 3106403090745602942.3481725171230015744. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.010450605s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[254954895]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 23:30:59.263) (total time: 30001ms):
	Trace[254954895]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (23:31:29.264)
	Trace[254954895]: [30.001669104s] [30.001669104s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1581349608]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 23:30:59.262) (total time: 30003ms):
	Trace[1581349608]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (23:31:29.264)
	Trace[1581349608]: [30.003336626s] [30.003336626s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[405473182]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 23:30:59.265) (total time: 30001ms):
	Trace[405473182]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (23:31:29.266)
	Trace[405473182]: [30.001211712s] [30.001211712s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
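
The reflector failures above all reduce to one symptom: for roughly 30 seconds after the restart, CoreDNS could not open a TCP connection to the kubernetes Service VIP 10.96.0.1:443, presumably because kube-proxy had not yet reprogrammed the service rules. A minimal probe of that same path, runnable from inside the pod network; the address is taken from the log:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Same endpoint the reflector's List calls were dialing.
        conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
        if err != nil {
            fmt.Println("dial failed:", err) // matches the "i/o timeout" entries above
            return
        }
        conn.Close()
        fmt.Println("service VIP reachable")
    }
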
	
	
	==> describe nodes <==
	Name:               ha-138000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-138000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=ha-138000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T16_19_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:19:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-138000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 23:34:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:30:42 +0000   Thu, 15 Aug 2024 23:19:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:30:42 +0000   Thu, 15 Aug 2024 23:19:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:30:42 +0000   Thu, 15 Aug 2024 23:19:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:30:42 +0000   Thu, 15 Aug 2024 23:30:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-138000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 92a77083c2c148ceb3a6c27974611a44
	  System UUID:                bf1b4c04-0000-0000-a028-0dd0a6dcd337
	  Boot ID:                    0c496489-3552-4f3e-814f-62743ebab1dd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wgww9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-6f6b679f8f-dmgt5             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-6f6b679f8f-zc8jj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-138000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-77dc6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-138000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-138000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-cznkn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-138000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-138000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m20s                kube-proxy       
	  Normal  Starting                 14m                  kube-proxy       
	  Normal  NodeHasSufficientPID     14m                  kubelet          Node ha-138000 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                  kubelet          Node ha-138000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                  kubelet          Node ha-138000 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           14m                  node-controller  Node ha-138000 event: Registered Node ha-138000 in Controller
	  Normal  NodeReady                14m                  kubelet          Node ha-138000 status is now: NodeReady
	  Normal  RegisteredNode           13m                  node-controller  Node ha-138000 event: Registered Node ha-138000 in Controller
	  Normal  RegisteredNode           12m                  node-controller  Node ha-138000 event: Registered Node ha-138000 in Controller
	  Normal  RegisteredNode           10m                  node-controller  Node ha-138000 event: Registered Node ha-138000 in Controller
	  Normal  Starting                 4m8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m8s (x8 over 4m8s)  kubelet          Node ha-138000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s (x8 over 4m8s)  kubelet          Node ha-138000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s (x7 over 4m8s)  kubelet          Node ha-138000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m36s                node-controller  Node ha-138000 event: Registered Node ha-138000 in Controller
	  Normal  RegisteredNode           3m14s                node-controller  Node ha-138000 event: Registered Node ha-138000 in Controller
	  Normal  RegisteredNode           2m36s                node-controller  Node ha-138000 event: Registered Node ha-138000 in Controller
	  Normal  RegisteredNode           28s                  node-controller  Node ha-138000 event: Registered Node ha-138000 in Controller
	
	
	Name:               ha-138000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-138000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=ha-138000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T16_20_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:20:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-138000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 23:34:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:30:40 +0000   Thu, 15 Aug 2024 23:20:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:30:40 +0000   Thu, 15 Aug 2024 23:20:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:30:40 +0000   Thu, 15 Aug 2024 23:20:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:30:40 +0000   Thu, 15 Aug 2024 23:20:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-138000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 f9fb9b8d5e3646d78c1f55449a26b188
	  System UUID:                4cff4215-0000-0000-9139-05f05b79bce3
	  Boot ID:                    26a8e1bf-75d0-4caa-b86c-d0e6f8c9e474
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-s6zqd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-138000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-z6mnx                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-138000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-138000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-tf79g                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-138000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-138000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 3m38s                  kube-proxy       
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-138000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-138000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-138000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-138000-m02 event: Registered Node ha-138000-m02 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-138000-m02 event: Registered Node ha-138000-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-138000-m02 event: Registered Node ha-138000-m02 in Controller
	  Normal   NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Warning  Rebooted                 10m                    kubelet          Node ha-138000-m02 has been rebooted, boot id: 8d4ef345-e3b6-437d-95f7-338233576a37
	  Normal   NodeHasSufficientMemory  10m                    kubelet          Node ha-138000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m                    kubelet          Node ha-138000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m                    kubelet          Node ha-138000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                    node-controller  Node ha-138000-m02 event: Registered Node ha-138000-m02 in Controller
	  Normal   Starting                 3m49s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  3m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    3m48s (x8 over 3m49s)  kubelet          Node ha-138000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  3m48s (x8 over 3m49s)  kubelet          Node ha-138000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     3m48s (x7 over 3m49s)  kubelet          Node ha-138000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m36s                  node-controller  Node ha-138000-m02 event: Registered Node ha-138000-m02 in Controller
	  Normal   RegisteredNode           3m14s                  node-controller  Node ha-138000-m02 event: Registered Node ha-138000-m02 in Controller
	  Normal   RegisteredNode           2m36s                  node-controller  Node ha-138000-m02 event: Registered Node ha-138000-m02 in Controller
	  Normal   RegisteredNode           28s                    node-controller  Node ha-138000-m02 event: Registered Node ha-138000-m02 in Controller
	
	
	Name:               ha-138000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-138000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=ha-138000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T16_21_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:21:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-138000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 23:34:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:31:37 +0000   Thu, 15 Aug 2024 23:31:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:31:37 +0000   Thu, 15 Aug 2024 23:31:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:31:37 +0000   Thu, 15 Aug 2024 23:31:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:31:37 +0000   Thu, 15 Aug 2024 23:31:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-138000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 a589cb93968b432caa5fc365bb995740
	  System UUID:                42284b8b-0000-0000-ac7c-129bf380703a
	  Boot ID:                    3cf0bc98-5f0e-4a33-80fb-e0c2d84cf3db
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-t5sdh                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-138000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-dsvxt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-138000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-138000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-kxghx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-138000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-138000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m39s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-138000-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-138000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-138000-m03 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           12m                    node-controller  Node ha-138000-m03 event: Registered Node ha-138000-m03 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-138000-m03 event: Registered Node ha-138000-m03 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-138000-m03 event: Registered Node ha-138000-m03 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-138000-m03 event: Registered Node ha-138000-m03 in Controller
	  Normal   RegisteredNode           3m36s                  node-controller  Node ha-138000-m03 event: Registered Node ha-138000-m03 in Controller
	  Normal   RegisteredNode           3m14s                  node-controller  Node ha-138000-m03 event: Registered Node ha-138000-m03 in Controller
	  Normal   NodeNotReady             2m56s                  node-controller  Node ha-138000-m03 status is now: NodeNotReady
	  Normal   Starting                 2m43s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m43s (x3 over 2m43s)  kubelet          Node ha-138000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m43s (x3 over 2m43s)  kubelet          Node ha-138000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m43s (x3 over 2m43s)  kubelet          Node ha-138000-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m43s (x2 over 2m43s)  kubelet          Node ha-138000-m03 has been rebooted, boot id: 3cf0bc98-5f0e-4a33-80fb-e0c2d84cf3db
	  Normal   NodeReady                2m43s (x2 over 2m43s)  kubelet          Node ha-138000-m03 status is now: NodeReady
	  Normal   RegisteredNode           2m36s                  node-controller  Node ha-138000-m03 event: Registered Node ha-138000-m03 in Controller
	  Normal   RegisteredNode           28s                    node-controller  Node ha-138000-m03 event: Registered Node ha-138000-m03 in Controller
	
	
	Name:               ha-138000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-138000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=ha-138000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T16_22_36_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:22:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-138000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 23:34:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:32:40 +0000   Thu, 15 Aug 2024 23:32:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:32:40 +0000   Thu, 15 Aug 2024 23:32:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:32:40 +0000   Thu, 15 Aug 2024 23:32:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:32:40 +0000   Thu, 15 Aug 2024 23:32:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-138000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 f4edcad8d76a442b9919d65bbd5ebb03
	  System UUID:                e49846a0-0000-0000-a846-8a8b2da04ea9
	  Boot ID:                    7d49d130-2f84-43a9-9c3e-7a69f44367c4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-m887r       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-proxy-qpth7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 11m                  kube-proxy       
	  Normal   Starting                 98s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  11m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)    kubelet          Node ha-138000-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)    kubelet          Node ha-138000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)    kubelet          Node ha-138000-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           11m                  node-controller  Node ha-138000-m04 event: Registered Node ha-138000-m04 in Controller
	  Normal   RegisteredNode           11m                  node-controller  Node ha-138000-m04 event: Registered Node ha-138000-m04 in Controller
	  Normal   RegisteredNode           11m                  node-controller  Node ha-138000-m04 event: Registered Node ha-138000-m04 in Controller
	  Normal   NodeReady                11m                  kubelet          Node ha-138000-m04 status is now: NodeReady
	  Normal   RegisteredNode           10m                  node-controller  Node ha-138000-m04 event: Registered Node ha-138000-m04 in Controller
	  Normal   RegisteredNode           3m36s                node-controller  Node ha-138000-m04 event: Registered Node ha-138000-m04 in Controller
	  Normal   RegisteredNode           3m14s                node-controller  Node ha-138000-m04 event: Registered Node ha-138000-m04 in Controller
	  Normal   NodeNotReady             2m56s                node-controller  Node ha-138000-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           2m36s                node-controller  Node ha-138000-m04 event: Registered Node ha-138000-m04 in Controller
	  Normal   Starting                 100s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  100s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 100s (x3 over 100s)  kubelet          Node ha-138000-m04 has been rebooted, boot id: 7d49d130-2f84-43a9-9c3e-7a69f44367c4
	  Normal   NodeHasSufficientMemory  100s (x4 over 100s)  kubelet          Node ha-138000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    100s (x4 over 100s)  kubelet          Node ha-138000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     100s (x4 over 100s)  kubelet          Node ha-138000-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             100s                 kubelet          Node ha-138000-m04 status is now: NodeNotReady
	  Normal   NodeReady                100s (x2 over 100s)  kubelet          Node ha-138000-m04 status is now: NodeReady
	  Normal   RegisteredNode           28s                  node-controller  Node ha-138000-m04 event: Registered Node ha-138000-m04 in Controller
	
	
	Name:               ha-138000-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-138000-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=ha-138000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T16_33_47_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:33:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-138000-m05
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 23:34:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:34:15 +0000   Thu, 15 Aug 2024 23:33:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:34:15 +0000   Thu, 15 Aug 2024 23:33:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:34:15 +0000   Thu, 15 Aug 2024 23:33:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:34:15 +0000   Thu, 15 Aug 2024 23:34:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.9
	  Hostname:    ha-138000-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 f3dfa27002394276aaf2f2145134003c
	  System UUID:                6c6a4a36-0000-0000-8ab7-05c1068f3e22
	  Boot ID:                    9c557ac2-9d33-48bd-8957-21ce53b8339d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-138000-m05                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         34s
	  kube-system                 kindnet-qdhwz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      36s
	  kube-system                 kube-apiserver-ha-138000-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-ha-138000-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-dwbgv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-scheduler-ha-138000-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-vip-ha-138000-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 31s                kube-proxy       
	  Normal  RegisteredNode           36s                node-controller  Node ha-138000-m05 event: Registered Node ha-138000-m05 in Controller
	  Normal  NodeHasSufficientMemory  36s (x8 over 36s)  kubelet          Node ha-138000-m05 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet          Node ha-138000-m05 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x7 over 36s)  kubelet          Node ha-138000-m05 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  36s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           34s                node-controller  Node ha-138000-m05 event: Registered Node ha-138000-m05 in Controller
	  Normal  RegisteredNode           31s                node-controller  Node ha-138000-m05 event: Registered Node ha-138000-m05 in Controller
	  Normal  RegisteredNode           28s                node-controller  Node ha-138000-m05 event: Registered Node ha-138000-m05 in Controller
	
	
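(Editor's note: the node status above is the output of kubectl describe nodes. For readers reproducing this check programmatically, here is a minimal client-go sketch — illustrative only, not part of the test suite — that lists the same Ready conditions, assuming a kubeconfig at the default path:)

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig at the conventional location; the CI run above
	// instead uses KUBECONFIG under the minikube-integration workspace.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Print the NodeReady condition that appears in each Conditions table above.
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("%s Ready=%s (%s)\n", n.Name, c.Status, c.Reason)
			}
		}
	}
}
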
	==> dmesg <==
	[  +0.035773] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007968] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.680855] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000003] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006866] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Aug15 23:30] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.162045] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.989029] systemd-fstab-generator[468]: Ignoring "noauto" option for root device
	[  +0.101466] systemd-fstab-generator[487]: Ignoring "noauto" option for root device
	[  +1.930620] systemd-fstab-generator[1017]: Ignoring "noauto" option for root device
	[  +0.060770] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.229646] systemd-fstab-generator[1112]: Ignoring "noauto" option for root device
	[  +0.119765] systemd-fstab-generator[1124]: Ignoring "noauto" option for root device
	[  +0.123401] systemd-fstab-generator[1138]: Ignoring "noauto" option for root device
	[  +2.409334] systemd-fstab-generator[1357]: Ignoring "noauto" option for root device
	[  +0.114639] systemd-fstab-generator[1369]: Ignoring "noauto" option for root device
	[  +0.103538] systemd-fstab-generator[1381]: Ignoring "noauto" option for root device
	[  +0.135144] systemd-fstab-generator[1396]: Ignoring "noauto" option for root device
	[  +0.456371] systemd-fstab-generator[1560]: Ignoring "noauto" option for root device
	[  +6.803779] kauditd_printk_skb: 234 callbacks suppressed
	[ +21.488008] kauditd_printk_skb: 40 callbacks suppressed
	[ +18.019929] kauditd_printk_skb: 21 callbacks suppressed
	[Aug15 23:31] kauditd_printk_skb: 55 callbacks suppressed
	
	
	==> etcd [59dac0b44544] <==
	{"level":"info","ts":"2024-08-15T23:29:46.384063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to 2399e7dba5b18dfe at term 2"}
	{"level":"info","ts":"2024-08-15T23:29:46.384495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to c8daa22dc1df7d56 at term 2"}
	{"level":"warn","ts":"2024-08-15T23:29:46.408477Z","caller":"etcdserver/server.go:2139","msg":"failed to publish local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-138000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
	{"level":"warn","ts":"2024-08-15T23:29:46.415071Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"2399e7dba5b18dfe","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:29:46.415120Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"2399e7dba5b18dfe","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:29:46.419833Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c8daa22dc1df7d56","rtt":"0s","error":"dial tcp 192.169.0.7:2380: i/o timeout"}
	{"level":"warn","ts":"2024-08-15T23:29:46.419980Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c8daa22dc1df7d56","rtt":"0s","error":"dial tcp 192.169.0.7:2380: i/o timeout"}
	{"level":"warn","ts":"2024-08-15T23:29:46.732045Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419357201672,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-15T23:29:47.233019Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419357201672,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-08-15T23:29:47.382392Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-15T23:29:47.382847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-15T23:29:47.383118Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-08-15T23:29:47.383277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to 2399e7dba5b18dfe at term 2"}
	{"level":"info","ts":"2024-08-15T23:29:47.383307Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to c8daa22dc1df7d56 at term 2"}
	{"level":"warn","ts":"2024-08-15T23:29:47.734052Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419357201672,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-15T23:29:48.244565Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419357201672,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-08-15T23:29:48.381923Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-15T23:29:48.382007Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-15T23:29:48.382026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-08-15T23:29:48.382045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to 2399e7dba5b18dfe at term 2"}
	{"level":"info","ts":"2024-08-15T23:29:48.382066Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1864] sent MsgPreVote request to c8daa22dc1df7d56 at term 2"}
	{"level":"warn","ts":"2024-08-15T23:29:48.745537Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740419357201672,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-15T23:29:49.013739Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"4.788785781s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-15T23:29:49.013790Z","caller":"traceutil/trace.go:171","msg":"trace[283476530] range","detail":"{range_begin:; range_end:; }","duration":"4.78884981s","start":"2024-08-15T23:29:44.224933Z","end":"2024-08-15T23:29:49.013782Z","steps":["trace[283476530] 'agreement among raft nodes before linearized reading'  (duration: 4.788783568s)"],"step_count":1}
	{"level":"error","ts":"2024-08-15T23:29:49.013846Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: context canceled\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	
	
	==> etcd [67b207257b40] <==
	{"level":"info","ts":"2024-08-15T23:31:38.864626Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c8daa22dc1df7d56"}
	{"level":"warn","ts":"2024-08-15T23:31:40.245395Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c8daa22dc1df7d56","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-15T23:33:44.790998Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(2565336393327939070 13314548521573537860 14473058669918387542) learners=(16392460569644178297)"}
	{"level":"info","ts":"2024-08-15T23:33:44.791775Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","added-peer-id":"e37db809803d1b79","added-peer-peer-urls":["https://192.169.0.9:2380"]}
	{"level":"info","ts":"2024-08-15T23:33:44.791839Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"e37db809803d1b79"}
	{"level":"info","ts":"2024-08-15T23:33:44.792046Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"e37db809803d1b79"}
	{"level":"info","ts":"2024-08-15T23:33:44.792379Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"e37db809803d1b79"}
	{"level":"info","ts":"2024-08-15T23:33:44.792754Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"e37db809803d1b79"}
	{"level":"info","ts":"2024-08-15T23:33:44.792761Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"e37db809803d1b79"}
	{"level":"info","ts":"2024-08-15T23:33:44.792771Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"e37db809803d1b79"}
	{"level":"info","ts":"2024-08-15T23:33:44.792821Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"e37db809803d1b79"}
	{"level":"info","ts":"2024-08-15T23:33:44.793195Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"e37db809803d1b79","remote-peer-urls":["https://192.169.0.9:2380"]}
	{"level":"warn","ts":"2024-08-15T23:33:44.868346Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"e37db809803d1b79","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-08-15T23:33:45.370156Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"e37db809803d1b79","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-08-15T23:33:45.861476Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"e37db809803d1b79","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-08-15T23:33:45.911420Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"e37db809803d1b79"}
	{"level":"info","ts":"2024-08-15T23:33:45.915155Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"e37db809803d1b79"}
	{"level":"info","ts":"2024-08-15T23:33:45.928161Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"e37db809803d1b79"}
	{"level":"info","ts":"2024-08-15T23:33:45.965642Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"e37db809803d1b79","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-15T23:33:45.965845Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"e37db809803d1b79"}
	{"level":"info","ts":"2024-08-15T23:33:45.971954Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"e37db809803d1b79","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-15T23:33:45.972027Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"e37db809803d1b79"}
	{"level":"info","ts":"2024-08-15T23:33:46.864082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(2565336393327939070 13314548521573537860 14473058669918387542 16392460569644178297)"}
	{"level":"info","ts":"2024-08-15T23:33:46.864762Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844"}
	{"level":"info","ts":"2024-08-15T23:33:46.864975Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"b8c6c7563d17d844","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"e37db809803d1b79"}
	
	
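(Editor's note: the first etcd excerpt shows member b8c6c7563d17d844 repeatedly starting pre-vote elections at term 2 because its peers on 192.169.0.6:2380 and 192.169.0.7:2380 refuse connections; the second shows m05 being added as a learner and promoted once in sync. A minimal go.etcd.io/etcd/client/v3 sketch for probing member status follows — illustrative only; endpoints are taken from the log, and the TLS client certificates this cluster actually requires are omitted for brevity:)

package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Endpoint addresses as reported in the raft probe messages above.
	endpoints := []string{
		"https://192.169.0.5:2379",
		"https://192.169.0.6:2379",
		"https://192.169.0.7:2379",
	}
	// Note: a real connection to this cluster needs TLS in clientv3.Config.
	cli, err := clientv3.New(clientv3.Config{Endpoints: endpoints, DialTimeout: 5 * time.Second})
	if err != nil {
		panic(err)
	}
	defer cli.Close()
	for _, ep := range endpoints {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		st, err := cli.Status(ctx, ep)
		cancel()
		if err != nil {
			// Mirrors the "connection refused" prober warnings in the log above.
			fmt.Printf("%s unhealthy: %v\n", ep, err)
			continue
		}
		fmt.Printf("%s leader=%x raftTerm=%d\n", ep, st.Leader, st.RaftTerm)
	}
}
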
	==> kernel <==
	 23:34:20 up 4 min,  0 users,  load average: 0.32, 0.22, 0.10
	Linux ha-138000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [c2a16126718b] <==
	I0815 23:23:47.704130       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:23:57.712115       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0815 23:23:57.712139       1 main.go:299] handling current node
	I0815 23:23:57.712152       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0815 23:23:57.712157       1 main.go:322] Node ha-138000-m02 has CIDR [10.244.1.0/24] 
	I0815 23:23:57.712420       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0815 23:23:57.712543       1 main.go:322] Node ha-138000-m03 has CIDR [10.244.2.0/24] 
	I0815 23:23:57.712720       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0815 23:23:57.712823       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:24:07.712424       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0815 23:24:07.712474       1 main.go:299] handling current node
	I0815 23:24:07.712488       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0815 23:24:07.712494       1 main.go:322] Node ha-138000-m02 has CIDR [10.244.1.0/24] 
	I0815 23:24:07.712623       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0815 23:24:07.712704       1 main.go:322] Node ha-138000-m03 has CIDR [10.244.2.0/24] 
	I0815 23:24:07.712814       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0815 23:24:07.712851       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:24:17.705680       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0815 23:24:17.705716       1 main.go:322] Node ha-138000-m02 has CIDR [10.244.1.0/24] 
	I0815 23:24:17.706225       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0815 23:24:17.706282       1 main.go:322] Node ha-138000-m03 has CIDR [10.244.2.0/24] 
	I0815 23:24:17.706514       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0815 23:24:17.706582       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:24:17.706957       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0815 23:24:17.707108       1 main.go:299] handling current node
	
	
	==> kindnet [d35ee4327270] <==
	I0815 23:34:00.106196       1 main.go:322] Node ha-138000-m03 has CIDR [10.244.2.0/24] 
	I0815 23:34:00.106364       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0815 23:34:00.106472       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:34:00.106635       1 main.go:295] Handling node with IPs: map[192.169.0.9:{}]
	I0815 23:34:00.106705       1 main.go:322] Node ha-138000-m05 has CIDR [10.244.4.0/24] 
	I0815 23:34:10.106586       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0815 23:34:10.106637       1 main.go:299] handling current node
	I0815 23:34:10.106650       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0815 23:34:10.106656       1 main.go:322] Node ha-138000-m02 has CIDR [10.244.1.0/24] 
	I0815 23:34:10.107007       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0815 23:34:10.107108       1 main.go:322] Node ha-138000-m03 has CIDR [10.244.2.0/24] 
	I0815 23:34:10.107444       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0815 23:34:10.107485       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	I0815 23:34:10.107537       1 main.go:295] Handling node with IPs: map[192.169.0.9:{}]
	I0815 23:34:10.107543       1 main.go:322] Node ha-138000-m05 has CIDR [10.244.4.0/24] 
	I0815 23:34:20.107203       1 main.go:295] Handling node with IPs: map[192.169.0.9:{}]
	I0815 23:34:20.107280       1 main.go:322] Node ha-138000-m05 has CIDR [10.244.4.0/24] 
	I0815 23:34:20.107410       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0815 23:34:20.107433       1 main.go:299] handling current node
	I0815 23:34:20.107605       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0815 23:34:20.107728       1 main.go:322] Node ha-138000-m02 has CIDR [10.244.1.0/24] 
	I0815 23:34:20.107971       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0815 23:34:20.108094       1 main.go:322] Node ha-138000-m03 has CIDR [10.244.2.0/24] 
	I0815 23:34:20.108222       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0815 23:34:20.108267       1 main.go:322] Node ha-138000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [5ed11c46e0eb] <==
	I0815 23:29:32.056397       1 options.go:228] external host was not specified, using 192.169.0.5
	I0815 23:29:32.057840       1 server.go:142] Version: v1.31.0
	I0815 23:29:32.057961       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:29:32.445995       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0815 23:29:32.449536       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 23:29:32.452083       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0815 23:29:32.452114       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0815 23:29:32.452276       1 instance.go:232] Using reconciler: lease
	W0815 23:29:49.041556       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:33594->127.0.0.1:2379: read: connection reset by peer"
	W0815 23:29:49.041696       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:33564->127.0.0.1:2379: read: connection reset by peer"
	W0815 23:29:49.041767       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:33580->127.0.0.1:2379: read: connection reset by peer"
	W0815 23:29:50.044022       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:29:50.044031       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:29:50.044267       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:29:51.372028       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:29:51.388445       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:29:51.855782       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0815 23:29:52.453885       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [a6baf6e21d6c] <==
	I0815 23:30:40.344140       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0815 23:30:40.344259       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0815 23:30:40.418768       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0815 23:30:40.419548       1 shared_informer.go:320] Caches are synced for configmaps
	I0815 23:30:40.420315       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0815 23:30:40.420931       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0815 23:30:40.424034       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0815 23:30:40.424129       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0815 23:30:40.424470       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0815 23:30:40.424883       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0815 23:30:40.425391       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0815 23:30:40.425745       1 aggregator.go:171] initial CRD sync complete...
	I0815 23:30:40.425776       1 autoregister_controller.go:144] Starting autoregister controller
	I0815 23:30:40.425782       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0815 23:30:40.425786       1 cache.go:39] Caches are synced for autoregister controller
	I0815 23:30:40.429758       1 shared_informer.go:320] Caches are synced for node_authorizer
	W0815 23:30:40.433000       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.6]
	I0815 23:30:40.451364       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 23:30:40.451641       1 policy_source.go:224] refreshing policies
	I0815 23:30:40.467536       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0815 23:30:40.536982       1 controller.go:615] quota admission added evaluator for: endpoints
	I0815 23:30:40.548680       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0815 23:30:40.556609       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0815 23:30:41.331073       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0815 23:30:41.666666       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	
	
	==> kube-controller-manager [2ed9ae042726] <==
	I0815 23:30:20.677986       1 serving.go:386] Generated self-signed cert in-memory
	I0815 23:30:20.928931       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0815 23:30:20.928987       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:30:20.930507       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0815 23:30:20.930593       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0815 23:30:20.931118       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0815 23:30:20.931317       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0815 23:30:40.940723       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [f4a0ec142726] <==
	E0815 23:33:44.354831       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-wqjwr failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-wqjwr\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0815 23:33:44.494010       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-138000-m05\" does not exist"
	I0815 23:33:44.494178       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-138000-m04"
	I0815 23:33:44.504885       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-138000-m05" podCIDRs=["10.244.4.0/24"]
	I0815 23:33:44.504924       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	I0815 23:33:44.505303       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	I0815 23:33:44.549041       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	I0815 23:33:44.744603       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	I0815 23:33:44.817084       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	I0815 23:33:46.814638       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-138000-m05"
	I0815 23:33:46.815166       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	I0815 23:33:46.866626       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	I0815 23:33:47.077137       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	I0815 23:33:47.734826       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	I0815 23:33:47.818103       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	I0815 23:33:49.474243       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	I0815 23:33:49.514611       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	I0815 23:33:52.481257       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	I0815 23:33:52.570383       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	I0815 23:33:54.926748       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	I0815 23:34:04.664748       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-138000-m04"
	I0815 23:34:04.665628       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	I0815 23:34:04.674364       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	I0815 23:34:04.780604       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	I0815 23:34:15.243177       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-138000-m05"
	
	
	==> kube-proxy [3102e608c7d6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 23:30:59.351348       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 23:30:59.378221       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0815 23:30:59.378378       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 23:30:59.417171       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 23:30:59.417213       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 23:30:59.417230       1 server_linux.go:169] "Using iptables Proxier"
	I0815 23:30:59.420831       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 23:30:59.421491       1 server.go:483] "Version info" version="v1.31.0"
	I0815 23:30:59.421522       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:30:59.424760       1 config.go:197] "Starting service config controller"
	I0815 23:30:59.425626       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 23:30:59.426090       1 config.go:104] "Starting endpoint slice config controller"
	I0815 23:30:59.426116       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 23:30:59.427803       1 config.go:326] "Starting node config controller"
	I0815 23:30:59.428510       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 23:30:59.526834       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 23:30:59.526859       1 shared_informer.go:320] Caches are synced for service config
	I0815 23:30:59.528661       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [fc2e141007ef] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 23:19:33.922056       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 23:19:33.939645       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0815 23:19:33.939881       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 23:19:33.966815       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 23:19:33.966963       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 23:19:33.967061       1 server_linux.go:169] "Using iptables Proxier"
	I0815 23:19:33.969119       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 23:19:33.969437       1 server.go:483] "Version info" version="v1.31.0"
	I0815 23:19:33.969466       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:19:33.970289       1 config.go:197] "Starting service config controller"
	I0815 23:19:33.970403       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 23:19:33.970441       1 config.go:104] "Starting endpoint slice config controller"
	I0815 23:19:33.970446       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 23:19:33.970870       1 config.go:326] "Starting node config controller"
	I0815 23:19:33.970895       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 23:19:34.070930       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 23:19:34.070930       1 shared_informer.go:320] Caches are synced for node config
	I0815 23:19:34.070944       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [ac6935271595] <==
	W0815 23:29:03.654257       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:29:03.654675       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:29:04.192220       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:29:04.192311       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:29:07.683875       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:29:07.683942       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:29:07.708489       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:29:07.708791       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:29:17.257133       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:29:17.257240       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:29:26.626316       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:29:26.626443       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:29:29.967116       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0815 23:29:29.967155       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:29:42.147720       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0815 23:29:42.148149       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0815 23:29:43.616204       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0815 23:29:43.616440       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0815 23:29:45.922991       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0815 23:29:45.923106       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	E0815 23:29:49.027901       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	I0815 23:29:49.028326       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0815 23:29:49.028478       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0815 23:29:49.028500       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file" logger="UnhandledError"
	E0815 23:29:49.029058       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c2ddb52a9846] <==
	I0815 23:30:20.706878       1 serving.go:386] Generated self-signed cert in-memory
	W0815 23:30:31.075526       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0815 23:30:31.075552       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0815 23:30:31.075556       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0815 23:30:40.370669       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0815 23:30:40.370712       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:30:40.375435       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0815 23:30:40.379182       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0815 23:30:40.379313       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0815 23:30:40.379473       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0815 23:30:40.480276       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0815 23:33:44.560164       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-446bp\": pod kube-proxy-446bp is already assigned to node \"ha-138000-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-446bp" node="ha-138000-m05"
	E0815 23:33:44.560266       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-446bp\": pod kube-proxy-446bp is already assigned to node \"ha-138000-m05\"" pod="kube-system/kube-proxy-446bp"
	E0815 23:33:44.557333       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-dwbgv\": pod kube-proxy-dwbgv is already assigned to node \"ha-138000-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-dwbgv" node="ha-138000-m05"
	E0815 23:33:44.561697       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod b2af37e5-4561-41a8-abff-c4b6f4042f0f(kube-system/kube-proxy-dwbgv) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-dwbgv"
	E0815 23:33:44.566064       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-dwbgv\": pod kube-proxy-dwbgv is already assigned to node \"ha-138000-m05\"" pod="kube-system/kube-proxy-dwbgv"
	I0815 23:33:44.566120       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-dwbgv" node="ha-138000-m05"
	E0815 23:33:44.566395       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qdhwz\": pod kindnet-qdhwz is already assigned to node \"ha-138000-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-qdhwz" node="ha-138000-m05"
	E0815 23:33:44.566731       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qdhwz\": pod kindnet-qdhwz is already assigned to node \"ha-138000-m05\"" pod="kube-system/kindnet-qdhwz"
	
	
	==> kubelet <==
	Aug 15 23:30:59 ha-138000 kubelet[1567]: I0815 23:30:59.824309    1567 scope.go:117] "RemoveContainer" containerID="6a1122913bb1811dd9cfff9fde8c221a2c969f80db1f0bcc1a66f58faaa88395"
	Aug 15 23:31:00 ha-138000 kubelet[1567]: I0815 23:31:00.825729    1567 scope.go:117] "RemoveContainer" containerID="8f20284cd3969cd69aa4dd7eb37b8d05c7df4f53aa8c6f636949fd401174eba1"
	Aug 15 23:31:01 ha-138000 kubelet[1567]: I0815 23:31:01.825360    1567 scope.go:117] "RemoveContainer" containerID="42f5d82b004174c93ffa1441e156ff5ca6d23b9457598805927d06b8823a41bd"
	Aug 15 23:31:03 ha-138000 kubelet[1567]: I0815 23:31:03.825285    1567 scope.go:117] "RemoveContainer" containerID="2ed9ae04272666896274c0cc9cbac7e240c18a02b0b35eaab975e10a79d1a635"
	Aug 15 23:31:12 ha-138000 kubelet[1567]: E0815 23:31:12.861012    1567 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 23:31:12 ha-138000 kubelet[1567]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 23:31:12 ha-138000 kubelet[1567]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 23:31:12 ha-138000 kubelet[1567]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 23:31:12 ha-138000 kubelet[1567]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 23:31:12 ha-138000 kubelet[1567]: I0815 23:31:12.976621    1567 scope.go:117] "RemoveContainer" containerID="e919017e14bb91f5bec7b5fdf0351f27904f841341d654e814d90d000a091f26"
	Aug 15 23:32:12 ha-138000 kubelet[1567]: E0815 23:32:12.862060    1567 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 23:32:12 ha-138000 kubelet[1567]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 23:32:12 ha-138000 kubelet[1567]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 23:32:12 ha-138000 kubelet[1567]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 23:32:12 ha-138000 kubelet[1567]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 23:33:12 ha-138000 kubelet[1567]: E0815 23:33:12.860851    1567 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 23:33:12 ha-138000 kubelet[1567]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 23:33:12 ha-138000 kubelet[1567]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 23:33:12 ha-138000 kubelet[1567]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 23:33:12 ha-138000 kubelet[1567]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 23:34:12 ha-138000 kubelet[1567]: E0815 23:34:12.860978    1567 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 23:34:12 ha-138000 kubelet[1567]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 23:34:12 ha-138000 kubelet[1567]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 23:34:12 ha-138000 kubelet[1567]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 23:34:12 ha-138000 kubelet[1567]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-138000 -n ha-138000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-138000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (4.70s)
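The kube-controller-manager error captured above embeds the apiserver's verbose health report, one [+] or [-] entry per check; the two [-] entries (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) are what kept the health wait from succeeding. A minimal sketch for fetching that report directly, assuming the apiserver address from these logs is still reachable and that anonymous /healthz reads are permitted (both assumptions, not verified by this run):

	# assumes 192.169.0.5:8443 (from the logs above) is reachable and allows
	# anonymous /healthz access; prints one line per health check
	curl -sk 'https://192.169.0.5:8443/healthz?verbose'
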
TestMountStart/serial/StartWithMountFirst (136.65s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-736000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
E0815 16:38:57.690997    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:39:00.945136    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p mount-start-1-736000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : exit status 80 (2m16.573472414s)

-- stdout --
	* [mount-start-1-736000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-977/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting minikube without Kubernetes in cluster mount-start-1-736000
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "mount-start-1-736000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a6:7d:2c:2:1:c9
	* Failed to start hyperkit VM. Running "minikube delete -p mount-start-1-736000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a2:34:c5:66:3f:d3
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a2:34:c5:66:3f:d3
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-amd64 start -p mount-start-1-736000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-736000 -n mount-start-1-736000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-736000 -n mount-start-1-736000: exit status 7 (78.886196ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0815 16:40:25.284296    4655 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0815 16:40:25.284320    4655 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-736000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMountStart/serial/StartWithMountFirst (136.65s)
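On macOS the hyperkit driver learns the guest IP by polling the host's DHCP lease file for the VM's MAC address, which is what "IP address never found in dhcp leases file" refers to. A hedged diagnostic sketch, using the MAC from the failure above and the stock macOS bootpd paths (both taken as assumptions for illustration):

	# MAC address copied from the failure above; /var/db/dhcpd_leases is the
	# stock macOS bootpd lease file the driver polls
	grep -i -B1 -A4 'a2:34:c5:66:3f:d3' /var/db/dhcpd_leases
	# assumes macOS unified logging; shows recent DHCP-server (bootpd) activity
	sudo log show --last 10m --predicate 'process == "bootpd"'

If the MAC never shows up in the lease file, the guest was never handed an address, which matches the repeated create/delete retries in the stdout above.
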
TestScheduledStopUnix (141.86s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-914000 --memory=2048 --driver=hyperkit 
E0815 16:52:37.982330    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:53:57.802006    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-914000 --memory=2048 --driver=hyperkit : exit status 80 (2m16.470868742s)

-- stdout --
	* [scheduled-stop-914000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-977/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-914000" primary control-plane node in "scheduled-stop-914000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "scheduled-stop-914000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ee:30:3:dd:ff:86
	* Failed to start hyperkit VM. Running "minikube delete -p scheduled-stop-914000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for c2:92:a7:8c:65:f4
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for c2:92:a7:8c:65:f4
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-914000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-977/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-914000" primary control-plane node in "scheduled-stop-914000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "scheduled-stop-914000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ee:30:3:dd:ff:86
	* Failed to start hyperkit VM. Running "minikube delete -p scheduled-stop-914000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for c2:92:a7:8c:65:f4
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for c2:92:a7:8c:65:f4
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-08-15 16:54:08.72168 -0700 PDT m=+2955.660941669
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-914000 -n scheduled-stop-914000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-914000 -n scheduled-stop-914000: exit status 7 (78.244406ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0815 16:54:08.798264    5483 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0815 16:54:08.798283    5483 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-914000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "scheduled-stop-914000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-914000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-914000: (5.312956891s)
--- FAIL: TestScheduledStopUnix (141.86s)
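The stderr block records the same lease failure twice with different MAC addresses, one per VM creation attempt. Sketched below is the recovery the log itself suggests, with driver-level verbosity added; the flags are standard minikube options, but treating them as a useful next step is an assumption, not part of the test:

	# the log's own suggested fix, plus verbose logging for the retry
	out/minikube-darwin-amd64 delete -p scheduled-stop-914000
	out/minikube-darwin-amd64 start -p scheduled-stop-914000 --memory=2048 --driver=hyperkit --alsologtostderr -v=7
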
TestPause/serial/Start (141.8s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-052000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p pause-052000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : exit status 80 (2m21.725525407s)

-- stdout --
	* [pause-052000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-977/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "pause-052000" primary control-plane node in "pause-052000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "pause-052000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a2:d1:de:a9:b7:c3
	* Failed to start hyperkit VM. Running "minikube delete -p pause-052000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for f6:42:9a:25:f:78
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for f6:42:9a:25:f:78
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-amd64 start -p pause-052000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-052000 -n pause-052000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-052000 -n pause-052000: exit status 7 (79.007191ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0815 17:02:59.961678    6436 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0815 17:02:59.961698    6436 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-052000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestPause/serial/Start (141.80s)
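The post-mortem helpers in these failures read individual fields out of minikube status through Go templates ({{.Host}} above, {{.APIServer}} elsewhere in this report). A small sketch of the same mechanism querying several fields at once; .Host and .APIServer appear in this report, while .Kubelet is assumed from the status command's documented output:

	# one Go template can pull several status fields; exit status 7 still
	# indicates a missing or errored host, as in the post-mortems above
	out/minikube-darwin-amd64 status -p pause-052000 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'
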
Test pass (290/327)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 25.93
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.29
9 TestDownloadOnly/v1.20.0/DeleteAll 0.23
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.21
12 TestDownloadOnly/v1.31.0/json-events 12.25
13 TestDownloadOnly/v1.31.0/preload-exists 0
16 TestDownloadOnly/v1.31.0/kubectl 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.3
18 TestDownloadOnly/v1.31.0/DeleteAll 0.23
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.21
21 TestBinaryMirror 0.98
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.17
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.19
27 TestAddons/Setup 203.47
29 TestAddons/serial/Volcano 39.64
31 TestAddons/serial/GCPAuth/Namespaces 0.1
33 TestAddons/parallel/Registry 15.54
34 TestAddons/parallel/Ingress 19.41
35 TestAddons/parallel/InspektorGadget 10.54
36 TestAddons/parallel/MetricsServer 5.57
37 TestAddons/parallel/HelmTiller 9.92
39 TestAddons/parallel/CSI 41.77
40 TestAddons/parallel/Headlamp 41.22
41 TestAddons/parallel/CloudSpanner 5.44
42 TestAddons/parallel/LocalPath 53.69
43 TestAddons/parallel/NvidiaDevicePlugin 5.34
44 TestAddons/parallel/Yakd 10.49
45 TestAddons/StoppedEnableDisable 5.92
53 TestHyperKitDriverInstallOrUpdate 11.06
56 TestErrorSpam/setup 35.95
57 TestErrorSpam/start 1.76
58 TestErrorSpam/status 0.53
59 TestErrorSpam/pause 1.41
60 TestErrorSpam/unpause 1.44
61 TestErrorSpam/stop 106.8
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 88.09
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 40.5
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.13
73 TestFunctional/serial/CacheCmd/cache/add_local 1.35
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
75 TestFunctional/serial/CacheCmd/cache/list 0.08
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.17
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.08
78 TestFunctional/serial/CacheCmd/cache/delete 0.16
79 TestFunctional/serial/MinikubeKubectlCmd 1.25
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.58
81 TestFunctional/serial/ExtraConfig 42.83
82 TestFunctional/serial/ComponentHealth 0.05
83 TestFunctional/serial/LogsCmd 2.65
84 TestFunctional/serial/LogsFileCmd 2.71
85 TestFunctional/serial/InvalidService 4.38
87 TestFunctional/parallel/ConfigCmd 0.47
88 TestFunctional/parallel/DashboardCmd 12.54
89 TestFunctional/parallel/DryRun 0.98
90 TestFunctional/parallel/InternationalLanguage 0.49
91 TestFunctional/parallel/StatusCmd 0.52
95 TestFunctional/parallel/ServiceCmdConnect 7.58
96 TestFunctional/parallel/AddonsCmd 0.23
97 TestFunctional/parallel/PersistentVolumeClaim 28.18
99 TestFunctional/parallel/SSHCmd 0.31
100 TestFunctional/parallel/CpCmd 1.12
101 TestFunctional/parallel/MySQL 22.15
102 TestFunctional/parallel/FileSync 0.23
103 TestFunctional/parallel/CertSync 1.07
107 TestFunctional/parallel/NodeLabels 0.05
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.16
111 TestFunctional/parallel/License 0.58
112 TestFunctional/parallel/Version/short 0.1
113 TestFunctional/parallel/Version/components 0.52
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.2
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.16
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.16
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.16
118 TestFunctional/parallel/ImageCommands/ImageBuild 2.29
119 TestFunctional/parallel/ImageCommands/Setup 1.86
120 TestFunctional/parallel/DockerEnv/bash 0.63
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.26
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.25
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.12
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.71
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.48
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.38
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.37
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.74
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.43
131 TestFunctional/parallel/ServiceCmd/DeployApp 20.11
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.38
134 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.14
137 TestFunctional/parallel/ServiceCmd/List 0.38
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.37
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.27
140 TestFunctional/parallel/ServiceCmd/Format 0.26
141 TestFunctional/parallel/ServiceCmd/URL 0.28
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
143 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.04
145 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.03
146 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
148 TestFunctional/parallel/ProfileCmd/profile_not_create 0.26
149 TestFunctional/parallel/ProfileCmd/profile_list 0.26
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.26
151 TestFunctional/parallel/MountCmd/any-port 6.23
152 TestFunctional/parallel/MountCmd/specific-port 1.35
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.92
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 198.2
161 TestMultiControlPlane/serial/DeployApp 5.3
162 TestMultiControlPlane/serial/PingHostFromPods 1.3
163 TestMultiControlPlane/serial/AddWorkerNode 49.79
164 TestMultiControlPlane/serial/NodeLabels 0.06
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.35
166 TestMultiControlPlane/serial/CopyFile 9.37
167 TestMultiControlPlane/serial/StopSecondaryNode 8.7
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.27
169 TestMultiControlPlane/serial/RestartSecondaryNode 40.42
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.35
181 TestImageBuild/serial/Setup 38.09
182 TestImageBuild/serial/NormalBuild 1.66
183 TestImageBuild/serial/BuildWithBuildArg 0.75
184 TestImageBuild/serial/BuildWithDockerIgnore 0.6
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.67
189 TestJSONOutput/start/Command 51.87
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.48
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.45
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 8.34
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.57
217 TestMainNoArgs 0.08
218 TestMinikubeProfile 89.02
224 TestMultiNode/serial/FreshStart2Nodes 106.33
225 TestMultiNode/serial/DeployApp2Nodes 4.43
226 TestMultiNode/serial/PingHostFrom2Pods 0.89
227 TestMultiNode/serial/AddNode 45.47
228 TestMultiNode/serial/MultiNodeLabels 0.05
229 TestMultiNode/serial/ProfileList 0.18
230 TestMultiNode/serial/CopyFile 5.23
231 TestMultiNode/serial/StopNode 2.84
232 TestMultiNode/serial/StartAfterStop 36.51
233 TestMultiNode/serial/RestartKeepsNodes 151.09
234 TestMultiNode/serial/DeleteNode 3.26
235 TestMultiNode/serial/StopMultiNode 16.77
236 TestMultiNode/serial/RestartMultiNode 107.79
237 TestMultiNode/serial/ValidateNameConflict 44.27
241 TestPreload 145.74
244 TestSkaffold 114.69
247 TestRunningBinaryUpgrade 75.74
249 TestKubernetesUpgrade 203.94
262 TestStoppedBinaryUpgrade/Setup 2.07
263 TestStoppedBinaryUpgrade/Upgrade 112.51
264 TestStoppedBinaryUpgrade/MinikubeLogs 2.93
275 TestNoKubernetes/serial/StartNoK8sWithVersion 0.48
276 TestNoKubernetes/serial/StartWithK8s 85.3
277 TestNoKubernetes/serial/StartWithStopK8s 56.64
278 TestNoKubernetes/serial/Start 70.4
279 TestNoKubernetes/serial/VerifyK8sNotRunning 0.13
280 TestNoKubernetes/serial/ProfileList 0.37
281 TestNoKubernetes/serial/Stop 2.36
282 TestNoKubernetes/serial/StartNoArgs 75.58
283 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.73
284 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 7.74
285 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.16
286 TestNetworkPlugins/group/auto/Start 684.29
287 TestNetworkPlugins/group/auto/KubeletFlags 0.15
288 TestNetworkPlugins/group/auto/NetCatPod 12.14
289 TestNetworkPlugins/group/auto/DNS 0.13
290 TestNetworkPlugins/group/auto/Localhost 0.1
291 TestNetworkPlugins/group/auto/HairPin 0.1
292 TestNetworkPlugins/group/kindnet/Start 656.65
293 TestNetworkPlugins/group/calico/Start 199.36
294 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
295 TestNetworkPlugins/group/kindnet/KubeletFlags 0.15
296 TestNetworkPlugins/group/kindnet/NetCatPod 12.14
297 TestNetworkPlugins/group/kindnet/DNS 0.13
298 TestNetworkPlugins/group/kindnet/Localhost 0.1
299 TestNetworkPlugins/group/kindnet/HairPin 0.1
300 TestNetworkPlugins/group/custom-flannel/Start 54.92
301 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.15
302 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.15
303 TestNetworkPlugins/group/custom-flannel/DNS 0.14
304 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
305 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
306 TestNetworkPlugins/group/false/Start 81.44
307 TestNetworkPlugins/group/calico/ControllerPod 6
308 TestNetworkPlugins/group/calico/KubeletFlags 0.18
309 TestNetworkPlugins/group/calico/NetCatPod 12.15
310 TestNetworkPlugins/group/calico/DNS 0.13
311 TestNetworkPlugins/group/calico/Localhost 0.1
312 TestNetworkPlugins/group/calico/HairPin 0.1
313 TestNetworkPlugins/group/enable-default-cni/Start 81.4
314 TestNetworkPlugins/group/false/KubeletFlags 0.18
315 TestNetworkPlugins/group/false/NetCatPod 12.18
316 TestNetworkPlugins/group/false/DNS 0.17
317 TestNetworkPlugins/group/false/Localhost 0.1
318 TestNetworkPlugins/group/false/HairPin 0.1
319 TestNetworkPlugins/group/flannel/Start 51.11
320 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.17
321 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.15
322 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
323 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
324 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
325 TestNetworkPlugins/group/flannel/ControllerPod 6
326 TestNetworkPlugins/group/bridge/Start 78.94
327 TestNetworkPlugins/group/flannel/KubeletFlags 0.19
328 TestNetworkPlugins/group/flannel/NetCatPod 12.33
329 TestNetworkPlugins/group/flannel/DNS 0.12
330 TestNetworkPlugins/group/flannel/Localhost 0.1
331 TestNetworkPlugins/group/flannel/HairPin 0.1
332 TestNetworkPlugins/group/kubenet/Start 46.62
333 TestNetworkPlugins/group/kubenet/KubeletFlags 0.15
334 TestNetworkPlugins/group/kubenet/NetCatPod 13.14
335 TestNetworkPlugins/group/bridge/KubeletFlags 0.15
336 TestNetworkPlugins/group/bridge/NetCatPod 11.14
337 TestNetworkPlugins/group/bridge/DNS 0.12
338 TestNetworkPlugins/group/bridge/Localhost 0.1
339 TestNetworkPlugins/group/bridge/HairPin 0.1
340 TestNetworkPlugins/group/kubenet/DNS 21.3
342 TestStartStop/group/old-k8s-version/serial/FirstStart 141.37
343 TestNetworkPlugins/group/kubenet/Localhost 0.11
344 TestNetworkPlugins/group/kubenet/HairPin 0.1
346 TestStartStop/group/no-preload/serial/FirstStart 90.03
347 TestStartStop/group/no-preload/serial/DeployApp 9.21
348 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.74
349 TestStartStop/group/no-preload/serial/Stop 8.39
350 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.38
351 TestStartStop/group/no-preload/serial/SecondStart 293.05
352 TestStartStop/group/old-k8s-version/serial/DeployApp 7.34
353 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.76
354 TestStartStop/group/old-k8s-version/serial/Stop 8.41
355 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.32
356 TestStartStop/group/old-k8s-version/serial/SecondStart 403.76
357 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
358 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.06
359 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.17
360 TestStartStop/group/no-preload/serial/Pause 2.01
362 TestStartStop/group/embed-certs/serial/FirstStart 51.56
363 TestStartStop/group/embed-certs/serial/DeployApp 9.2
364 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.8
365 TestStartStop/group/embed-certs/serial/Stop 8.42
366 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.32
367 TestStartStop/group/embed-certs/serial/SecondStart 293.37
368 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
369 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.06
370 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.16
371 TestStartStop/group/old-k8s-version/serial/Pause 1.9
373 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 84.46
374 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.23
375 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.73
376 TestStartStop/group/default-k8s-diff-port/serial/Stop 8.42
377 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.32
378 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 291.65
379 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
380 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.06
381 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.16
382 TestStartStop/group/embed-certs/serial/Pause 1.98
384 TestStartStop/group/newest-cni/serial/FirstStart 41.58
385 TestStartStop/group/newest-cni/serial/DeployApp 0
386 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.77
387 TestStartStop/group/newest-cni/serial/Stop 8.4
388 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.32
389 TestStartStop/group/newest-cni/serial/SecondStart 29.55
390 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
391 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
392 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.18
393 TestStartStop/group/newest-cni/serial/Pause 1.88
394 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
395 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.06
396 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.18
397 TestStartStop/group/default-k8s-diff-port/serial/Pause 1.92

TestDownloadOnly/v1.20.0/json-events (25.93s)
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-921000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-921000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit : (25.926044754s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (25.93s)

TestDownloadOnly/v1.20.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.29s)
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-921000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-921000: exit status 85 (291.814257ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-921000 | jenkins | v1.33.1 | 15 Aug 24 16:04 PDT |          |
	|         | -p download-only-921000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 16:04:52
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 16:04:52.917123    1503 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:04:52.917318    1503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:04:52.917323    1503 out.go:358] Setting ErrFile to fd 2...
	I0815 16:04:52.917327    1503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:04:52.917482    1503 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-977/.minikube/bin
	W0815 16:04:52.917581    1503 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19452-977/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19452-977/.minikube/config/config.json: no such file or directory
	I0815 16:04:52.919272    1503 out.go:352] Setting JSON to true
	I0815 16:04:52.941577    1503 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":263,"bootTime":1723762829,"procs":416,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0815 16:04:52.941664    1503 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 16:04:52.963686    1503 out.go:97] [download-only-921000] minikube v1.33.1 on Darwin 14.6.1
	W0815 16:04:52.963881    1503 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball: no such file or directory
	I0815 16:04:52.963916    1503 notify.go:220] Checking for updates...
	I0815 16:04:52.985098    1503 out.go:169] MINIKUBE_LOCATION=19452
	I0815 16:04:53.006094    1503 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:04:53.027091    1503 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0815 16:04:53.048236    1503 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 16:04:53.069149    1503 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-977/.minikube
	W0815 16:04:53.111178    1503 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0815 16:04:53.111637    1503 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 16:04:53.162120    1503 out.go:97] Using the hyperkit driver based on user configuration
	I0815 16:04:53.162177    1503 start.go:297] selected driver: hyperkit
	I0815 16:04:53.162191    1503 start.go:901] validating driver "hyperkit" against <nil>
	I0815 16:04:53.162421    1503 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:04:53.162786    1503 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19452-977/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0815 16:04:53.564594    1503 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0815 16:04:53.569893    1503 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:04:53.569915    1503 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0815 16:04:53.569948    1503 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 16:04:53.574677    1503 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0815 16:04:53.574856    1503 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0815 16:04:53.574887    1503 cni.go:84] Creating CNI manager for ""
	I0815 16:04:53.574901    1503 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0815 16:04:53.574976    1503 start.go:340] cluster config:
	{Name:download-only-921000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:04:53.575195    1503 iso.go:125] acquiring lock: {Name:mkb93fc66b60be15dffa9eb2999a4be8d8d4d218 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:04:53.597156    1503 out.go:97] Downloading VM boot image ...
	I0815 16:04:53.597261    1503 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/19452-977/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0815 16:05:04.260442    1503 out.go:97] Starting "download-only-921000" primary control-plane node in "download-only-921000" cluster
	I0815 16:05:04.260476    1503 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0815 16:05:04.325845    1503 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0815 16:05:04.325900    1503 cache.go:56] Caching tarball of preloaded images
	I0815 16:05:04.326262    1503 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0815 16:05:04.347650    1503 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0815 16:05:04.347705    1503 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0815 16:05:04.479747    1503 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0815 16:05:13.236691    1503 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0815 16:05:13.236854    1503 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0815 16:05:13.786026    1503 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0815 16:05:13.786260    1503 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/download-only-921000/config.json ...
	I0815 16:05:13.786282    1503 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/download-only-921000/config.json: {Name:mk923a6ae016710ed9ef89829b8d7ca446b82384 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:05:13.786584    1503 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0815 16:05:13.786890    1503 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19452-977/.minikube/cache/darwin/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-921000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-921000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.29s)
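
Note on the exit status 85 assertions above: for a download-only profile no VM is ever started, so `minikube logs` is expected to fail, and the test passes precisely because the command exits non-zero. A minimal Go sketch of that pattern (not the suite's actual helper; the binary path and profile name are copied from the log) looks like:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "logs", "-p", "download-only-921000")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	switch {
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 85:
		// The expected outcome: there is no running cluster to collect logs from.
		fmt.Println("expected failure: exit status 85")
	case err != nil:
		fmt.Printf("unexpected error: %v\n%s", err, out)
	default:
		fmt.Println("unexpected success") // the test would flag this
	}
}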

TestDownloadOnly/v1.20.0/DeleteAll (0.23s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-921000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnly/v1.31.0/json-events (12.25s)
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-254000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-254000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=hyperkit : (12.25370652s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (12.25s)

TestDownloadOnly/v1.31.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.3s)
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-254000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-254000: exit status 85 (295.083669ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-921000 | jenkins | v1.33.1 | 15 Aug 24 16:04 PDT |                     |
	|         | -p download-only-921000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 15 Aug 24 16:05 PDT | 15 Aug 24 16:05 PDT |
	| delete  | -p download-only-921000        | download-only-921000 | jenkins | v1.33.1 | 15 Aug 24 16:05 PDT | 15 Aug 24 16:05 PDT |
	| start   | -o=json --download-only        | download-only-254000 | jenkins | v1.33.1 | 15 Aug 24 16:05 PDT |                     |
	|         | -p download-only-254000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 16:05:19
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 16:05:19.575349    1544 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:05:19.575535    1544 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:05:19.575540    1544 out.go:358] Setting ErrFile to fd 2...
	I0815 16:05:19.575544    1544 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:05:19.575718    1544 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-977/.minikube/bin
	I0815 16:05:19.577148    1544 out.go:352] Setting JSON to true
	I0815 16:05:19.599511    1544 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":290,"bootTime":1723762829,"procs":412,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0815 16:05:19.599594    1544 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 16:05:19.621462    1544 out.go:97] [download-only-254000] minikube v1.33.1 on Darwin 14.6.1
	I0815 16:05:19.621678    1544 notify.go:220] Checking for updates...
	I0815 16:05:19.644229    1544 out.go:169] MINIKUBE_LOCATION=19452
	I0815 16:05:19.665079    1544 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:05:19.686188    1544 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0815 16:05:19.707401    1544 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 16:05:19.729115    1544 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-977/.minikube
	W0815 16:05:19.771230    1544 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0815 16:05:19.771725    1544 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 16:05:19.802207    1544 out.go:97] Using the hyperkit driver based on user configuration
	I0815 16:05:19.802299    1544 start.go:297] selected driver: hyperkit
	I0815 16:05:19.802325    1544 start.go:901] validating driver "hyperkit" against <nil>
	I0815 16:05:19.802544    1544 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:05:19.802778    1544 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19452-977/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0815 16:05:19.812590    1544 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0815 16:05:19.816520    1544 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:05:19.816541    1544 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0815 16:05:19.816570    1544 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 16:05:19.819359    1544 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0815 16:05:19.819500    1544 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0815 16:05:19.819527    1544 cni.go:84] Creating CNI manager for ""
	I0815 16:05:19.819547    1544 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 16:05:19.819556    1544 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 16:05:19.819620    1544 start.go:340] cluster config:
	{Name:download-only-254000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-254000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:05:19.819706    1544 iso.go:125] acquiring lock: {Name:mkb93fc66b60be15dffa9eb2999a4be8d8d4d218 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 16:05:19.841251    1544 out.go:97] Starting "download-only-254000" primary control-plane node in "download-only-254000" cluster
	I0815 16:05:19.841283    1544 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:05:19.905535    1544 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0815 16:05:19.905573    1544 cache.go:56] Caching tarball of preloaded images
	I0815 16:05:19.905933    1544 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:05:19.927442    1544 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0815 16:05:19.927470    1544 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 ...
	I0815 16:05:20.012986    1544 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4?checksum=md5:2dd98f97b896d7a4f012ee403b477cc8 -> /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0815 16:05:27.355276    1544 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 ...
	I0815 16:05:27.355465    1544 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19452-977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 ...
	I0815 16:05:27.823153    1544 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 16:05:27.823400    1544 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/download-only-254000/config.json ...
	I0815 16:05:27.823424    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/download-only-254000/config.json: {Name:mkc8d75b3edfdbb103ed120dd5500eb0e1a779f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 16:05:27.823742    1544 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 16:05:27.823948    1544 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19452-977/.minikube/cache/darwin/amd64/v1.31.0/kubectl
	
	
	* The control-plane node download-only-254000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-254000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.30s)

TestDownloadOnly/v1.31.0/DeleteAll (0.23s)
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.23s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.21s)
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-254000
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.21s)

TestBinaryMirror (0.98s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-892000 --alsologtostderr --binary-mirror http://127.0.0.1:49627 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-892000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-892000
--- PASS: TestBinaryMirror (0.98s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.17s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-640000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-640000: exit status 85 (165.808022ms)

-- stdout --
	* Profile "addons-640000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-640000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.17s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-640000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-640000: exit status 85 (186.504176ms)

-- stdout --
	* Profile "addons-640000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-640000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

TestAddons/Setup (203.47s)
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-640000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-darwin-amd64 start -p addons-640000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m23.474365573s)
--- PASS: TestAddons/Setup (203.47s)

TestAddons/serial/Volcano (39.64s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 12.231481ms
addons_test.go:897: volcano-scheduler stabilized in 12.376246ms
addons_test.go:905: volcano-admission stabilized in 12.456306ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-vb54g" [2651ba63-0080-4655-85ff-20db3374ea90] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.004067622s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-b6h99" [a2ee2c86-4f09-4ed9-98f3-71e4b69baca9] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00451152s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-mtsjf" [4a92babc-0960-4eee-a5cc-9a4e864dacaa] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.00444333s
addons_test.go:932: (dbg) Run:  kubectl --context addons-640000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-640000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-640000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [e266e516-e1b3-47ca-8fbd-dcee3a92eece] Pending
helpers_test.go:344: "test-job-nginx-0" [e266e516-e1b3-47ca-8fbd-dcee3a92eece] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [e266e516-e1b3-47ca-8fbd-dcee3a92eece] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.00491658s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-amd64 -p addons-640000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-amd64 -p addons-640000 addons disable volcano --alsologtostderr -v=1: (10.323740061s)
--- PASS: TestAddons/serial/Volcano (39.64s)
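
The three "waiting 6m0s for pods matching ..." steps above are label-based readiness polls against the volcano-system namespace. An equivalent standalone check for one of them (a sketch only; it swaps the suite's own polling helpers for a plain `kubectl wait`, with the label, namespace, and 6-minute budget taken from the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Block until every volcano-scheduler pod reports the Ready condition.
	out, err := exec.Command("kubectl", "--context", "addons-640000",
		"wait", "--for=condition=Ready", "pod",
		"-l", "app=volcano-scheduler",
		"-n", "volcano-system", "--timeout=6m").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("pods did not become Ready:", err)
	}
}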

TestAddons/serial/GCPAuth/Namespaces (0.1s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-640000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-640000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/parallel/Registry (15.54s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.82792ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-mxjkm" [4769dbbb-9672-4ff5-8e01-371a3255129d] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00485747s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-cc2br" [2d922b87-8e73-4412-a5f8-a62d077dc9d7] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00367184s
addons_test.go:342: (dbg) Run:  kubectl --context addons-640000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-640000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-640000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.901822701s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-amd64 -p addons-640000 ip
addons_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 -p addons-640000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.54s)
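
The key assertion in this test is the in-cluster probe: a disposable busybox pod fetches the registry Service by its cluster DNS name. Reproducing just that probe from Go (a sketch; the kubectl command line is taken from the log, except that `-i` replaces `-it` since a Go process has no TTY to attach):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Spin up a throwaway pod that HEAD-requests the registry Service.
	cmd := exec.Command("kubectl", "--context", "addons-640000",
		"run", "--rm", "registry-test", "--restart=Never",
		"--image=gcr.io/k8s-minikube/busybox", "-i", "--",
		"sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("registry probe failed:", err)
	}
}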

TestAddons/parallel/Ingress (19.41s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-640000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-640000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-640000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [897beefc-b693-424b-b8c5-cf4f7e3e3469] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [897beefc-b693-424b-b8c5-cf4f7e3e3469] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.005498188s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 -p addons-640000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-640000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 -p addons-640000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.169.0.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 -p addons-640000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 -p addons-640000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-amd64 -p addons-640000 addons disable ingress --alsologtostderr -v=1: (7.598901432s)
--- PASS: TestAddons/parallel/Ingress (19.41s)

TestAddons/parallel/InspektorGadget (10.54s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-dqfsl" [eb43d333-e84a-4ae7-8c76-fcf7154e2307] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003095194s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-640000
addons_test.go:851: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-640000: (5.540545261s)
--- PASS: TestAddons/parallel/InspektorGadget (10.54s)

TestAddons/parallel/MetricsServer (5.57s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.703162ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-xtq6p" [0f223418-75c7-4d2a-9fdf-060b872eb9a3] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004556839s
addons_test.go:417: (dbg) Run:  kubectl --context addons-640000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-amd64 -p addons-640000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.57s)

TestAddons/parallel/HelmTiller (9.92s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 1.643748ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-wb8zw" [63e024c4-c124-4915-ba74-e0bd8fbacadf] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004562142s
addons_test.go:475: (dbg) Run:  kubectl --context addons-640000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-640000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.468197417s)
addons_test.go:492: (dbg) Run:  out/minikube-darwin-amd64 -p addons-640000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.92s)

TestAddons/parallel/CSI (41.77s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 3.74805ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-640000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-640000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-640000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-640000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-640000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-640000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-640000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [467b7779-1216-4afe-9215-5d0b8a42478b] Pending
helpers_test.go:344: "task-pv-pod" [467b7779-1216-4afe-9215-5d0b8a42478b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [467b7779-1216-4afe-9215-5d0b8a42478b] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.008269177s
addons_test.go:590: (dbg) Run:  kubectl --context addons-640000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-640000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-640000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-640000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-640000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-640000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-640000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-640000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
2024/08/15 16:10:09 [DEBUG] GET http://192.169.0.2:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-640000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-640000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-640000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-640000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-640000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-640000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-640000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-640000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-640000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-640000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-640000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-640000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [54d2b500-e498-413c-af98-e397795b84d4] Pending
helpers_test.go:344: "task-pv-pod-restore" [54d2b500-e498-413c-af98-e397795b84d4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [54d2b500-e498-413c-af98-e397795b84d4] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003745793s
addons_test.go:632: (dbg) Run:  kubectl --context addons-640000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-640000 delete pod task-pv-pod-restore: (1.054540339s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-640000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-640000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-amd64 -p addons-640000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-amd64 -p addons-640000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.440683822s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-amd64 -p addons-640000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (41.77s)
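
Each repeated `get pvc ... -o jsonpath={.status.phase}` line above is one poll iteration: the helper re-reads the claim's phase until it reports Bound. A self-contained version of that loop (a sketch; the claim name and context come from the log, while the 2-second poll interval and the use of a 6-minute deadline are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound re-reads the claim's .status.phase until it is Bound
// or the deadline passes, mirroring the polling visible in the log.
func waitForPVCBound(kubeContext, pvc string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", pvc, "-n", "default",
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second) // poll interval is an assumption
	}
	return fmt.Errorf("pvc %q not Bound within %s", pvc, timeout)
}

func main() {
	if err := waitForPVCBound("addons-640000", "hpvc", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}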

TestAddons/parallel/Headlamp (41.22s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-640000 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-640000 --alsologtostderr -v=1: (1.022034956s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-zmmqh" [43bffbb3-6828-4a1b-97bf-9a6ddc301232] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-zmmqh" [43bffbb3-6828-4a1b-97bf-9a6ddc301232] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 40.00492967s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-amd64 -p addons-640000 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (41.22s)

TestAddons/parallel/CloudSpanner (5.44s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-gs2rx" [1fb8fb47-884d-41dd-a41d-568cbfb46650] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003810407s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-640000
--- PASS: TestAddons/parallel/CloudSpanner (5.44s)

TestAddons/parallel/LocalPath (53.69s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-640000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-640000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-640000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-640000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-640000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-640000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-640000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-640000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [4469eb8c-6f69-40c0-9f37-580a43c52745] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [4469eb8c-6f69-40c0-9f37-580a43c52745] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [4469eb8c-6f69-40c0-9f37-580a43c52745] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003528884s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-640000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-amd64 -p addons-640000 ssh "cat /opt/local-path-provisioner/pvc-aff32d49-b97b-4383-bca8-245f1d42c23c_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-640000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-640000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-amd64 -p addons-640000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-amd64 -p addons-640000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.00998385s)
--- PASS: TestAddons/parallel/LocalPath (53.69s)
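
The repeated helpers_test.go:394 lines above are a poll: the harness re-reads the PVC's .status.phase until the claim leaves Pending. A minimal shell sketch of the same wait, reusing the profile and PVC names from the log (the loop itself is illustrative, not the harness code):

	# Poll the PVC phase until the local-path provisioner reports it Bound
	while [ "$(kubectl --context addons-640000 get pvc test-pvc -n default -o jsonpath='{.status.phase}')" != "Bound" ]; do
	  sleep 2
	done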

TestAddons/parallel/NvidiaDevicePlugin (5.34s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-mqvs8" [05a253b6-3327-4476-bf69-444edf7dc82e] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004355618s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-640000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.34s)

TestAddons/parallel/Yakd (10.49s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-hvsw6" [4005c871-e0be-484a-b8d1-a3d4fecdd78a] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00371489s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-amd64 -p addons-640000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-amd64 -p addons-640000 addons disable yakd --alsologtostderr -v=1: (5.484107407s)
--- PASS: TestAddons/parallel/Yakd (10.49s)

TestAddons/StoppedEnableDisable (5.92s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-640000
addons_test.go:174: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-640000: (5.379650414s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-640000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-640000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-640000
--- PASS: TestAddons/StoppedEnableDisable (5.92s)

TestHyperKitDriverInstallOrUpdate (11.06s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (11.06s)

TestErrorSpam/setup (35.95s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-996000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-996000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-996000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-996000 --driver=hyperkit : (35.952298538s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0."
--- PASS: TestErrorSpam/setup (35.95s)

TestErrorSpam/start (1.76s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-996000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-996000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-996000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-996000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-996000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-996000 start --dry-run
--- PASS: TestErrorSpam/start (1.76s)

TestErrorSpam/status (0.53s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-996000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-996000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-996000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-996000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-996000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-996000 status
--- PASS: TestErrorSpam/status (0.53s)

TestErrorSpam/pause (1.41s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-996000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-996000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-996000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-996000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-996000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-996000 pause
--- PASS: TestErrorSpam/pause (1.41s)

TestErrorSpam/unpause (1.44s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-996000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-996000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-996000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-996000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-996000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-996000 unpause
--- PASS: TestErrorSpam/unpause (1.44s)

TestErrorSpam/stop (106.8s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-996000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-996000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-996000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-996000 stop: (5.392620466s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-996000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-996000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-996000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-996000 stop: (26.165613208s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-996000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-996000 stop
E0815 16:13:57.557728    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:13:57.567968    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:13:57.579133    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:13:57.600549    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:13:57.642466    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:13:57.724019    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:13:57.885262    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:13:58.206489    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:13:58.849682    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:14:00.130981    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:14:02.692272    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:14:07.813756    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:14:18.054984    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-amd64 -p nospam-996000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-996000 stop: (1m15.239226735s)
--- PASS: TestErrorSpam/stop (106.80s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19452-977/.minikube/files/etc/test/nested/copy/1498/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (88.09s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-506000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
E0815 16:14:38.535878    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:15:19.496389    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-darwin-amd64 start -p functional-506000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (1m28.08602871s)
--- PASS: TestFunctional/serial/StartWithProxy (88.09s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (40.5s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-506000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-amd64 start -p functional-506000 --alsologtostderr -v=8: (40.50401511s)
functional_test.go:663: soft start took 40.504462361s for "functional-506000" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.50s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-506000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.13s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-506000 cache add registry.k8s.io/pause:3.1: (1.172980904s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-506000 cache add registry.k8s.io/pause:3.3: (1.07341498s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.13s)

TestFunctional/serial/CacheCmd/cache/add_local (1.35s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-506000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1414985142/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 cache add minikube-local-cache-test:functional-506000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 cache delete minikube-local-cache-test:functional-506000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-506000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.35s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-506000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (143.524459ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.08s)
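
The block above is the round trip that `cache reload` exists for: remove a cached image inside the VM, watch crictl fail to find it, reload the cache, and watch the lookup succeed. Condensed from the commands in the log, with the expected outcomes as comments:

	out/minikube-darwin-amd64 -p functional-506000 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-darwin-amd64 -p functional-506000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image gone
	out/minikube-darwin-amd64 -p functional-506000 cache reload                                            # pushes cached images back into the VM
	out/minikube-darwin-amd64 -p functional-506000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # now succeeds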

TestFunctional/serial/CacheCmd/cache/delete (0.16s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (1.25s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 kubectl -- --context functional-506000 get pods
functional_test.go:716: (dbg) Done: out/minikube-darwin-amd64 -p functional-506000 kubectl -- --context functional-506000 get pods: (1.24857914s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.25s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.58s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-506000 get pods
E0815 16:16:41.517338    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:741: (dbg) Done: out/kubectl --context functional-506000 get pods: (1.580297752s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.58s)

TestFunctional/serial/ExtraConfig (42.83s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-506000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-darwin-amd64 start -p functional-506000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.833286741s)
functional_test.go:761: restart took 42.833427402s for "functional-506000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (42.83s)

TestFunctional/serial/ComponentHealth (0.05s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-506000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (2.65s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 logs
functional_test.go:1236: (dbg) Done: out/minikube-darwin-amd64 -p functional-506000 logs: (2.649769367s)
--- PASS: TestFunctional/serial/LogsCmd (2.65s)

TestFunctional/serial/LogsFileCmd (2.71s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd1631449195/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-darwin-amd64 -p functional-506000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd1631449195/001/logs.txt: (2.705573347s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.71s)

TestFunctional/serial/InvalidService (4.38s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-506000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-506000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-506000: exit status 115 (267.676954ms)

-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://192.169.0.4:31218 |
	|-----------|-------------|-------------|--------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-506000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.38s)
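
SVC_UNREACHABLE here means the Service object exists (minikube can print its URL table) but no running pod backs it. A hypothetical manifest that reproduces that condition; this is an illustration, not the actual contents of testdata/invalidsvc.yaml:

	kubectl --context functional-506000 apply -f - <<'EOF'
	apiVersion: v1
	kind: Service
	metadata:
	  name: invalid-svc
	spec:
	  type: NodePort
	  selector:
	    app: no-such-app   # hypothetical label that matches no pod
	  ports:
	  - port: 80
	EOF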

TestFunctional/parallel/ConfigCmd (0.47s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-506000 config get cpus: exit status 14 (55.530882ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-506000 config get cpus: exit status 14 (56.076581ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)
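
Exit status 14 is the expected result of `config get` on an unset key, which the test asserts both before and after the set/unset cycle. The cycle, condensed from the log:

	out/minikube-darwin-amd64 -p functional-506000 config set cpus 2
	out/minikube-darwin-amd64 -p functional-506000 config get cpus     # prints the stored value
	out/minikube-darwin-amd64 -p functional-506000 config unset cpus
	out/minikube-darwin-amd64 -p functional-506000 config get cpus     # exits 14: key not found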

TestFunctional/parallel/DashboardCmd (12.54s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-506000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-506000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3005: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.54s)

TestFunctional/parallel/DryRun (0.98s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-506000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-506000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (529.820934ms)

-- stdout --
	* [functional-506000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-977/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0815 16:18:30.620285    2959 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:18:30.620541    2959 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:18:30.620546    2959 out.go:358] Setting ErrFile to fd 2...
	I0815 16:18:30.620549    2959 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:18:30.620717    2959 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-977/.minikube/bin
	I0815 16:18:30.622175    2959 out.go:352] Setting JSON to false
	I0815 16:18:30.644878    2959 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1081,"bootTime":1723762829,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0815 16:18:30.644978    2959 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 16:18:30.666502    2959 out.go:177] * [functional-506000] minikube v1.33.1 on Darwin 14.6.1
	I0815 16:18:30.708383    2959 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 16:18:30.708449    2959 notify.go:220] Checking for updates...
	I0815 16:18:30.750984    2959 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:18:30.793030    2959 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0815 16:18:30.814204    2959 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 16:18:30.835451    2959 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-977/.minikube
	I0815 16:18:30.856169    2959 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 16:18:30.877922    2959 config.go:182] Loaded profile config "functional-506000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:18:30.878598    2959 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:18:30.878711    2959 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:18:30.888430    2959 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51050
	I0815 16:18:30.888811    2959 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:18:30.889242    2959 main.go:141] libmachine: Using API Version  1
	I0815 16:18:30.889253    2959 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:18:30.889535    2959 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:18:30.889664    2959 main.go:141] libmachine: (functional-506000) Calling .DriverName
	I0815 16:18:30.889861    2959 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 16:18:30.890121    2959 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:18:30.890145    2959 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:18:30.898666    2959 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51052
	I0815 16:18:30.899030    2959 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:18:30.899398    2959 main.go:141] libmachine: Using API Version  1
	I0815 16:18:30.899413    2959 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:18:30.899636    2959 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:18:30.899748    2959 main.go:141] libmachine: (functional-506000) Calling .DriverName
	I0815 16:18:30.929136    2959 out.go:177] * Using the hyperkit driver based on existing profile
	I0815 16:18:30.986976    2959 start.go:297] selected driver: hyperkit
	I0815 16:18:30.986997    2959 start.go:901] validating driver "hyperkit" against &{Name:functional-506000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-506000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:18:30.987151    2959 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 16:18:31.011180    2959 out.go:201] 
	W0815 16:18:31.032133    2959 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0815 16:18:31.053004    2959 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-506000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (0.98s)

TestFunctional/parallel/InternationalLanguage (0.49s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-506000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-506000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (488.029941ms)

-- stdout --
	* [functional-506000] minikube v1.33.1 sur Darwin 14.6.1
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-977/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0815 16:18:31.595103    2977 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:18:31.595258    2977 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:18:31.595263    2977 out.go:358] Setting ErrFile to fd 2...
	I0815 16:18:31.595266    2977 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:18:31.595450    2977 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-977/.minikube/bin
	I0815 16:18:31.597062    2977 out.go:352] Setting JSON to false
	I0815 16:18:31.620045    2977 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1082,"bootTime":1723762829,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0815 16:18:31.620138    2977 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 16:18:31.641002    2977 out.go:177] * [functional-506000] minikube v1.33.1 sur Darwin 14.6.1
	I0815 16:18:31.681846    2977 notify.go:220] Checking for updates...
	I0815 16:18:31.704166    2977 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 16:18:31.726092    2977 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig
	I0815 16:18:31.746790    2977 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0815 16:18:31.767983    2977 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 16:18:31.789081    2977 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-977/.minikube
	I0815 16:18:31.809760    2977 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 16:18:31.832696    2977 config.go:182] Loaded profile config "functional-506000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:18:31.833420    2977 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:18:31.833511    2977 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:18:31.843161    2977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51060
	I0815 16:18:31.843534    2977 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:18:31.843994    2977 main.go:141] libmachine: Using API Version  1
	I0815 16:18:31.844019    2977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:18:31.844272    2977 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:18:31.844398    2977 main.go:141] libmachine: (functional-506000) Calling .DriverName
	I0815 16:18:31.844586    2977 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 16:18:31.844843    2977 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:18:31.844867    2977 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:18:31.853467    2977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51062
	I0815 16:18:31.854041    2977 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:18:31.854356    2977 main.go:141] libmachine: Using API Version  1
	I0815 16:18:31.854376    2977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:18:31.854567    2977 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:18:31.854706    2977 main.go:141] libmachine: (functional-506000) Calling .DriverName
	I0815 16:18:31.882774    2977 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I0815 16:18:31.925056    2977 start.go:297] selected driver: hyperkit
	I0815 16:18:31.925083    2977 start.go:901] validating driver "hyperkit" against &{Name:functional-506000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-506000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 16:18:31.925263    2977 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 16:18:31.949794    2977 out.go:201] 
	W0815 16:18:31.971220    2977 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0815 16:18:31.991808    2977 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.49s)

TestFunctional/parallel/StatusCmd (0.52s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.52s)

TestFunctional/parallel/ServiceCmdConnect (7.58s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-506000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-506000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-2kbhd" [859a5f6e-a007-480b-a116-0a2d703363a8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-2kbhd" [859a5f6e-a007-480b-a116-0a2d703363a8] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003592615s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.169.0.4:30257
functional_test.go:1675: http://192.169.0.4:30257: success! body:

Hostname: hello-node-connect-67bdd5bbb4-2kbhd

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.169.0.4:8080/

Request Headers:
	accept-encoding=gzip
	host=192.169.0.4:30257
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.58s)
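
The echoserver body above is the end-to-end proof that a NodePort service is reachable from the host. The three steps the test drives, taken from the log (the NodePort itself is assigned by Kubernetes at expose time):

	kubectl --context functional-506000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-506000 expose deployment hello-node-connect --type=NodePort --port=8080
	out/minikube-darwin-amd64 -p functional-506000 service hello-node-connect --url   # here: http://192.169.0.4:30257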

TestFunctional/parallel/AddonsCmd (0.23s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.23s)

TestFunctional/parallel/PersistentVolumeClaim (28.18s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [db7d7e6b-4cf7-4ce5-be09-99e44cf43c41] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003953163s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-506000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-506000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-506000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-506000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [21a48e78-db48-4558-8a8b-ecdc8d47b634] Pending
helpers_test.go:344: "sp-pod" [21a48e78-db48-4558-8a8b-ecdc8d47b634] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [21a48e78-db48-4558-8a8b-ecdc8d47b634] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.00444293s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-506000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-506000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-506000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [18f78dea-2f35-472d-beb0-52b813023721] Pending
helpers_test.go:344: "sp-pod" [18f78dea-2f35-472d-beb0-52b813023721] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [18f78dea-2f35-472d-beb0-52b813023721] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.002927397s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-506000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.18s)
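The sequence above is the core of the test: write a marker file through the first pod, delete and recreate the pod, and confirm the file is still on the claimed volume. A rough sketch of that flow driving kubectl directly, assuming the same context and manifests (the run helper is illustrative, not part of the test code):

package main

import (
	"log"
	"os/exec"
)

// run executes kubectl against a fixed context and fails fast on error.
func run(args ...string) []byte {
	cmd := exec.Command("kubectl", append([]string{"--context", "functional-506000"}, args...)...)
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	return out
}

func main() {
	// Write a marker file onto the mounted PVC, then recycle the pod.
	run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (The real test waits for the new pod to become Ready before this check.)
	out := run("exec", "sp-pod", "--", "ls", "/tmp/mount")
	log.Printf("files on volume after pod restart:\n%s", out)
}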

TestFunctional/parallel/SSHCmd (0.31s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.31s)

TestFunctional/parallel/CpCmd (1.12s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 ssh -n functional-506000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 cp functional-506000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelCpCmd772293448/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 ssh -n functional-506000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 ssh -n functional-506000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.12s)

TestFunctional/parallel/MySQL (22.15s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-506000 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-r2pzp" [b929c7d1-0cc7-4b91-9c82-7119e3c2faa0] Pending
helpers_test.go:344: "mysql-6cdb49bbb-r2pzp" [b929c7d1-0cc7-4b91-9c82-7119e3c2faa0] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-r2pzp" [b929c7d1-0cc7-4b91-9c82-7119e3c2faa0] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.00295489s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-506000 exec mysql-6cdb49bbb-r2pzp -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-506000 exec mysql-6cdb49bbb-r2pzp -- mysql -ppassword -e "show databases;": exit status 1 (167.191218ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-506000 exec mysql-6cdb49bbb-r2pzp -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.15s)
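Note the transient ERROR 1045 above: the pod reports Running before mysqld has finished initializing its credentials, so the first exec fails and the test simply retries until the query succeeds. A sketch of such a retry loop (the pod name is copied from this run; the 2-minute deadline and 5-second interval are arbitrary choices):

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	// mysqld can briefly reject logins right after the pod turns Running,
	// so poll "show databases;" until it succeeds or the deadline passes.
	deadline := time.Now().Add(2 * time.Minute)
	for {
		cmd := exec.Command("kubectl", "--context", "functional-506000",
			"exec", "mysql-6cdb49bbb-r2pzp", "--",
			"mysql", "-ppassword", "-e", "show databases;")
		out, err := cmd.CombinedOutput()
		if err == nil {
			log.Printf("mysql ready:\n%s", out)
			return
		}
		if time.Now().After(deadline) {
			log.Fatalf("mysql never became ready: %v\n%s", err, out)
		}
		time.Sleep(5 * time.Second)
	}
}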

TestFunctional/parallel/FileSync (0.23s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1498/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 ssh "sudo cat /etc/test/nested/copy/1498/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

TestFunctional/parallel/CertSync (1.07s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1498.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 ssh "sudo cat /etc/ssl/certs/1498.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1498.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 ssh "sudo cat /usr/share/ca-certificates/1498.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/14982.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 ssh "sudo cat /etc/ssl/certs/14982.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/14982.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 ssh "sudo cat /usr/share/ca-certificates/14982.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.07s)

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-506000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.16s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-506000 ssh "sudo systemctl is-active crio": exit status 1 (160.547499ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.16s)
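Here `systemctl is-active` prints `inactive` and exits with status 3, which `minikube ssh` surfaces as exit status 1; since Docker is the active runtime, a non-zero exit for crio is exactly what the test expects. A sketch of the same check (binary path and profile name as in this run):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// `systemctl is-active` prints the unit state and exits non-zero for
	// anything that is not "active", so a failing exit is the desired outcome
	// when verifying that the non-selected runtime is disabled.
	cmd := exec.Command("out/minikube-darwin-amd64", "-p", "functional-506000",
		"ssh", "sudo systemctl is-active crio")
	out, err := cmd.CombinedOutput()
	state := strings.TrimSpace(string(out))
	if err == nil {
		log.Fatalf("crio unexpectedly active: %q", state)
	}
	fmt.Printf("crio state %q (non-zero exit, as expected)\n", state)
}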

TestFunctional/parallel/License (0.58s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.58s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (0.52s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.52s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-506000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-506000
docker.io/kicbase/echo-server:functional-506000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-506000 image ls --format short --alsologtostderr:
I0815 16:18:33.926338    3011 out.go:345] Setting OutFile to fd 1 ...
I0815 16:18:33.935344    3011 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 16:18:33.935354    3011 out.go:358] Setting ErrFile to fd 2...
I0815 16:18:33.935360    3011 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 16:18:33.935596    3011 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-977/.minikube/bin
I0815 16:18:33.956992    3011 config.go:182] Loaded profile config "functional-506000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0815 16:18:33.957232    3011 config.go:182] Loaded profile config "functional-506000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0815 16:18:33.958014    3011 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0815 16:18:33.958112    3011 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0815 16:18:33.967460    3011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51115
I0815 16:18:33.967889    3011 main.go:141] libmachine: () Calling .GetVersion
I0815 16:18:33.968324    3011 main.go:141] libmachine: Using API Version  1
I0815 16:18:33.968355    3011 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 16:18:33.968604    3011 main.go:141] libmachine: () Calling .GetMachineName
I0815 16:18:33.968730    3011 main.go:141] libmachine: (functional-506000) Calling .GetState
I0815 16:18:33.968815    3011 main.go:141] libmachine: (functional-506000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0815 16:18:33.968900    3011 main.go:141] libmachine: (functional-506000) DBG | hyperkit pid from json: 2234
I0815 16:18:33.970276    3011 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0815 16:18:33.970306    3011 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0815 16:18:33.978972    3011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51117
I0815 16:18:33.979332    3011 main.go:141] libmachine: () Calling .GetVersion
I0815 16:18:33.979678    3011 main.go:141] libmachine: Using API Version  1
I0815 16:18:33.979690    3011 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 16:18:33.979891    3011 main.go:141] libmachine: () Calling .GetMachineName
I0815 16:18:33.980000    3011 main.go:141] libmachine: (functional-506000) Calling .DriverName
I0815 16:18:33.980160    3011 ssh_runner.go:195] Run: systemctl --version
I0815 16:18:33.980178    3011 main.go:141] libmachine: (functional-506000) Calling .GetSSHHostname
I0815 16:18:33.980255    3011 main.go:141] libmachine: (functional-506000) Calling .GetSSHPort
I0815 16:18:33.980376    3011 main.go:141] libmachine: (functional-506000) Calling .GetSSHKeyPath
I0815 16:18:33.980454    3011 main.go:141] libmachine: (functional-506000) Calling .GetSSHUsername
I0815 16:18:33.980533    3011 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/functional-506000/id_rsa Username:docker}
I0815 16:18:34.018043    3011 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0815 16:18:34.040040    3011 main.go:141] libmachine: Making call to close driver server
I0815 16:18:34.040051    3011 main.go:141] libmachine: (functional-506000) Calling .Close
I0815 16:18:34.040205    3011 main.go:141] libmachine: Successfully made call to close driver server
I0815 16:18:34.040217    3011 main.go:141] libmachine: Making call to close connection to plugin binary
I0815 16:18:34.040221    3011 main.go:141] libmachine: (functional-506000) DBG | Closing plugin on server side
I0815 16:18:34.040224    3011 main.go:141] libmachine: Making call to close driver server
I0815 16:18:34.040248    3011 main.go:141] libmachine: (functional-506000) Calling .Close
I0815 16:18:34.040382    3011 main.go:141] libmachine: (functional-506000) DBG | Closing plugin on server side
I0815 16:18:34.040409    3011 main.go:141] libmachine: Successfully made call to close driver server
I0815 16:18:34.040418    3011 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.20s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-506000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| localhost/my-image                          | functional-506000 | ce89c1285d7bb | 1.24MB |
| docker.io/library/minikube-local-cache-test | functional-506000 | 48f7799e9b062 | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.31.0           | 045733566833c | 88.4MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| docker.io/kicbase/echo-server               | functional-506000 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/kube-scheduler              | v1.31.0           | 1766f54c897f0 | 67.4MB |
| registry.k8s.io/kube-apiserver              | v1.31.0           | 604f5db92eaa8 | 94.2MB |
| docker.io/library/nginx                     | alpine            | 1ae23480369fa | 43.2MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/nginx                     | latest            | 5ef79149e0ec8 | 188MB  |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-proxy                  | v1.31.0           | ad83b2ca7b09e | 91.5MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-506000 image ls --format table --alsologtostderr:
I0815 16:18:36.736463    3036 out.go:345] Setting OutFile to fd 1 ...
I0815 16:18:36.736788    3036 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 16:18:36.736793    3036 out.go:358] Setting ErrFile to fd 2...
I0815 16:18:36.736797    3036 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 16:18:36.736993    3036 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-977/.minikube/bin
I0815 16:18:36.737745    3036 config.go:182] Loaded profile config "functional-506000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0815 16:18:36.737853    3036 config.go:182] Loaded profile config "functional-506000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0815 16:18:36.738191    3036 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0815 16:18:36.738244    3036 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0815 16:18:36.746805    3036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51152
I0815 16:18:36.747229    3036 main.go:141] libmachine: () Calling .GetVersion
I0815 16:18:36.747640    3036 main.go:141] libmachine: Using API Version  1
I0815 16:18:36.747662    3036 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 16:18:36.747886    3036 main.go:141] libmachine: () Calling .GetMachineName
I0815 16:18:36.748005    3036 main.go:141] libmachine: (functional-506000) Calling .GetState
I0815 16:18:36.748097    3036 main.go:141] libmachine: (functional-506000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0815 16:18:36.748154    3036 main.go:141] libmachine: (functional-506000) DBG | hyperkit pid from json: 2234
I0815 16:18:36.749439    3036 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0815 16:18:36.749459    3036 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0815 16:18:36.757817    3036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51154
I0815 16:18:36.758175    3036 main.go:141] libmachine: () Calling .GetVersion
I0815 16:18:36.758517    3036 main.go:141] libmachine: Using API Version  1
I0815 16:18:36.758533    3036 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 16:18:36.758803    3036 main.go:141] libmachine: () Calling .GetMachineName
I0815 16:18:36.758920    3036 main.go:141] libmachine: (functional-506000) Calling .DriverName
I0815 16:18:36.759084    3036 ssh_runner.go:195] Run: systemctl --version
I0815 16:18:36.759106    3036 main.go:141] libmachine: (functional-506000) Calling .GetSSHHostname
I0815 16:18:36.759192    3036 main.go:141] libmachine: (functional-506000) Calling .GetSSHPort
I0815 16:18:36.759271    3036 main.go:141] libmachine: (functional-506000) Calling .GetSSHKeyPath
I0815 16:18:36.759350    3036 main.go:141] libmachine: (functional-506000) Calling .GetSSHUsername
I0815 16:18:36.759430    3036 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/functional-506000/id_rsa Username:docker}
I0815 16:18:36.794593    3036 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0815 16:18:36.813038    3036 main.go:141] libmachine: Making call to close driver server
I0815 16:18:36.813049    3036 main.go:141] libmachine: (functional-506000) Calling .Close
I0815 16:18:36.813192    3036 main.go:141] libmachine: Successfully made call to close driver server
I0815 16:18:36.813205    3036 main.go:141] libmachine: Making call to close connection to plugin binary
I0815 16:18:36.813210    3036 main.go:141] libmachine: Making call to close driver server
I0815 16:18:36.813212    3036 main.go:141] libmachine: (functional-506000) DBG | Closing plugin on server side
I0815 16:18:36.813216    3036 main.go:141] libmachine: (functional-506000) Calling .Close
I0815 16:18:36.813344    3036 main.go:141] libmachine: (functional-506000) DBG | Closing plugin on server side
I0815 16:18:36.813362    3036 main.go:141] libmachine: Successfully made call to close driver server
I0815 16:18:36.813370    3036 main.go:141] libmachine: Making call to close connection to plugin binary
2024/08/15 16:18:44 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.16s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-506000 image ls --format json --alsologtostderr:
[{"id":"1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"94200000"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"88400000"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"91500000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"ce89c1285d7bb58467d61d112eb7955514d9826d3043c6ab69da808b0cba6192","repoDigests":[],"repoTags":["localhost/my-image:functional-506000"],"size":"1240000"},{"id":"5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"67400000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-506000"],"size":"4940000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"48f7799e9b06226331fa46c13f33b7a275ea207b5913c033cf4789c2021f149f","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-506000"],"size":"30"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-506000 image ls --format json --alsologtostderr:
I0815 16:18:36.571735    3032 out.go:345] Setting OutFile to fd 1 ...
I0815 16:18:36.571934    3032 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 16:18:36.571939    3032 out.go:358] Setting ErrFile to fd 2...
I0815 16:18:36.571943    3032 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 16:18:36.572121    3032 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-977/.minikube/bin
I0815 16:18:36.573659    3032 config.go:182] Loaded profile config "functional-506000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0815 16:18:36.573768    3032 config.go:182] Loaded profile config "functional-506000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0815 16:18:36.574100    3032 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0815 16:18:36.574150    3032 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0815 16:18:36.582449    3032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51147
I0815 16:18:36.582889    3032 main.go:141] libmachine: () Calling .GetVersion
I0815 16:18:36.583316    3032 main.go:141] libmachine: Using API Version  1
I0815 16:18:36.583325    3032 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 16:18:36.583548    3032 main.go:141] libmachine: () Calling .GetMachineName
I0815 16:18:36.583657    3032 main.go:141] libmachine: (functional-506000) Calling .GetState
I0815 16:18:36.583749    3032 main.go:141] libmachine: (functional-506000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0815 16:18:36.583822    3032 main.go:141] libmachine: (functional-506000) DBG | hyperkit pid from json: 2234
I0815 16:18:36.585066    3032 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0815 16:18:36.585086    3032 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0815 16:18:36.593455    3032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51149
I0815 16:18:36.593798    3032 main.go:141] libmachine: () Calling .GetVersion
I0815 16:18:36.594134    3032 main.go:141] libmachine: Using API Version  1
I0815 16:18:36.594150    3032 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 16:18:36.594371    3032 main.go:141] libmachine: () Calling .GetMachineName
I0815 16:18:36.594484    3032 main.go:141] libmachine: (functional-506000) Calling .DriverName
I0815 16:18:36.594647    3032 ssh_runner.go:195] Run: systemctl --version
I0815 16:18:36.594666    3032 main.go:141] libmachine: (functional-506000) Calling .GetSSHHostname
I0815 16:18:36.594748    3032 main.go:141] libmachine: (functional-506000) Calling .GetSSHPort
I0815 16:18:36.594821    3032 main.go:141] libmachine: (functional-506000) Calling .GetSSHKeyPath
I0815 16:18:36.594897    3032 main.go:141] libmachine: (functional-506000) Calling .GetSSHUsername
I0815 16:18:36.594982    3032 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/functional-506000/id_rsa Username:docker}
I0815 16:18:36.629634    3032 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0815 16:18:36.654365    3032 main.go:141] libmachine: Making call to close driver server
I0815 16:18:36.654373    3032 main.go:141] libmachine: (functional-506000) Calling .Close
I0815 16:18:36.654522    3032 main.go:141] libmachine: Successfully made call to close driver server
I0815 16:18:36.654530    3032 main.go:141] libmachine: Making call to close connection to plugin binary
I0815 16:18:36.654535    3032 main.go:141] libmachine: Making call to close driver server
I0815 16:18:36.654540    3032 main.go:141] libmachine: (functional-506000) Calling .Close
I0815 16:18:36.654591    3032 main.go:141] libmachine: (functional-506000) DBG | Closing plugin on server side
I0815 16:18:36.654663    3032 main.go:141] libmachine: Successfully made call to close driver server
I0815 16:18:36.654673    3032 main.go:141] libmachine: Making call to close connection to plugin binary
I0815 16:18:36.654679    3032 main.go:141] libmachine: (functional-506000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.16s)
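The JSON above is a flat array of image records. A sketch of decoding it in Go, with the struct fields taken from the keys visible in this output (only a subset of the possible fields):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// image mirrors the fields visible in the `image ls --format json` output above.
type image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-506000",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatalf("image ls: %v", err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatalf("decode: %v", err)
	}
	for _, img := range images {
		fmt.Printf("%-60v %s bytes\n", img.RepoTags, img.Size)
	}
}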

TestFunctional/parallel/ImageCommands/ImageListYaml (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-506000 image ls --format yaml --alsologtostderr:
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-506000
size: "4940000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 48f7799e9b06226331fa46c13f33b7a275ea207b5913c033cf4789c2021f149f
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-506000
size: "30"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "91500000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "67400000"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "88400000"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "94200000"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-506000 image ls --format yaml --alsologtostderr:
I0815 16:18:34.122282    3015 out.go:345] Setting OutFile to fd 1 ...
I0815 16:18:34.122478    3015 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 16:18:34.122484    3015 out.go:358] Setting ErrFile to fd 2...
I0815 16:18:34.122487    3015 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 16:18:34.122670    3015 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-977/.minikube/bin
I0815 16:18:34.123270    3015 config.go:182] Loaded profile config "functional-506000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0815 16:18:34.123370    3015 config.go:182] Loaded profile config "functional-506000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0815 16:18:34.123721    3015 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0815 16:18:34.123764    3015 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0815 16:18:34.132044    3015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51121
I0815 16:18:34.132441    3015 main.go:141] libmachine: () Calling .GetVersion
I0815 16:18:34.132873    3015 main.go:141] libmachine: Using API Version  1
I0815 16:18:34.132883    3015 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 16:18:34.133099    3015 main.go:141] libmachine: () Calling .GetMachineName
I0815 16:18:34.133220    3015 main.go:141] libmachine: (functional-506000) Calling .GetState
I0815 16:18:34.133301    3015 main.go:141] libmachine: (functional-506000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0815 16:18:34.133373    3015 main.go:141] libmachine: (functional-506000) DBG | hyperkit pid from json: 2234
I0815 16:18:34.134633    3015 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0815 16:18:34.134666    3015 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0815 16:18:34.143064    3015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51123
I0815 16:18:34.143409    3015 main.go:141] libmachine: () Calling .GetVersion
I0815 16:18:34.143752    3015 main.go:141] libmachine: Using API Version  1
I0815 16:18:34.143767    3015 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 16:18:34.144014    3015 main.go:141] libmachine: () Calling .GetMachineName
I0815 16:18:34.144132    3015 main.go:141] libmachine: (functional-506000) Calling .DriverName
I0815 16:18:34.144272    3015 ssh_runner.go:195] Run: systemctl --version
I0815 16:18:34.144297    3015 main.go:141] libmachine: (functional-506000) Calling .GetSSHHostname
I0815 16:18:34.144372    3015 main.go:141] libmachine: (functional-506000) Calling .GetSSHPort
I0815 16:18:34.144448    3015 main.go:141] libmachine: (functional-506000) Calling .GetSSHKeyPath
I0815 16:18:34.144526    3015 main.go:141] libmachine: (functional-506000) Calling .GetSSHUsername
I0815 16:18:34.144606    3015 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/functional-506000/id_rsa Username:docker}
I0815 16:18:34.180851    3015 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0815 16:18:34.200173    3015 main.go:141] libmachine: Making call to close driver server
I0815 16:18:34.200181    3015 main.go:141] libmachine: (functional-506000) Calling .Close
I0815 16:18:34.200314    3015 main.go:141] libmachine: (functional-506000) DBG | Closing plugin on server side
I0815 16:18:34.200339    3015 main.go:141] libmachine: Successfully made call to close driver server
I0815 16:18:34.200347    3015 main.go:141] libmachine: Making call to close connection to plugin binary
I0815 16:18:34.200356    3015 main.go:141] libmachine: Making call to close driver server
I0815 16:18:34.200361    3015 main.go:141] libmachine: (functional-506000) Calling .Close
I0815 16:18:34.200527    3015 main.go:141] libmachine: (functional-506000) DBG | Closing plugin on server side
I0815 16:18:34.200574    3015 main.go:141] libmachine: Successfully made call to close driver server
I0815 16:18:34.200597    3015 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.16s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-506000 ssh pgrep buildkitd: exit status 1 (128.177255ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 image build -t localhost/my-image:functional-506000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-amd64 -p functional-506000 image build -t localhost/my-image:functional-506000 testdata/build --alsologtostderr: (2.010024705s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-506000 image build -t localhost/my-image:functional-506000 testdata/build --alsologtostderr:
I0815 16:18:34.410369    3024 out.go:345] Setting OutFile to fd 1 ...
I0815 16:18:34.410646    3024 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 16:18:34.410652    3024 out.go:358] Setting ErrFile to fd 2...
I0815 16:18:34.410655    3024 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 16:18:34.410834    3024 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-977/.minikube/bin
I0815 16:18:34.411397    3024 config.go:182] Loaded profile config "functional-506000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0815 16:18:34.412542    3024 config.go:182] Loaded profile config "functional-506000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0815 16:18:34.412918    3024 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0815 16:18:34.412958    3024 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0815 16:18:34.421518    3024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51134
I0815 16:18:34.421941    3024 main.go:141] libmachine: () Calling .GetVersion
I0815 16:18:34.422366    3024 main.go:141] libmachine: Using API Version  1
I0815 16:18:34.422375    3024 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 16:18:34.422575    3024 main.go:141] libmachine: () Calling .GetMachineName
I0815 16:18:34.422744    3024 main.go:141] libmachine: (functional-506000) Calling .GetState
I0815 16:18:34.422852    3024 main.go:141] libmachine: (functional-506000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0815 16:18:34.422923    3024 main.go:141] libmachine: (functional-506000) DBG | hyperkit pid from json: 2234
I0815 16:18:34.424227    3024 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0815 16:18:34.424254    3024 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0815 16:18:34.433044    3024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51136
I0815 16:18:34.433395    3024 main.go:141] libmachine: () Calling .GetVersion
I0815 16:18:34.433789    3024 main.go:141] libmachine: Using API Version  1
I0815 16:18:34.433814    3024 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 16:18:34.434031    3024 main.go:141] libmachine: () Calling .GetMachineName
I0815 16:18:34.434144    3024 main.go:141] libmachine: (functional-506000) Calling .DriverName
I0815 16:18:34.434310    3024 ssh_runner.go:195] Run: systemctl --version
I0815 16:18:34.434332    3024 main.go:141] libmachine: (functional-506000) Calling .GetSSHHostname
I0815 16:18:34.434413    3024 main.go:141] libmachine: (functional-506000) Calling .GetSSHPort
I0815 16:18:34.434493    3024 main.go:141] libmachine: (functional-506000) Calling .GetSSHKeyPath
I0815 16:18:34.434569    3024 main.go:141] libmachine: (functional-506000) Calling .GetSSHUsername
I0815 16:18:34.434648    3024 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/functional-506000/id_rsa Username:docker}
I0815 16:18:34.469504    3024 build_images.go:161] Building image from path: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.1899950010.tar
I0815 16:18:34.469595    3024 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0815 16:18:34.478616    3024 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1899950010.tar
I0815 16:18:34.482125    3024 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1899950010.tar: stat -c "%s %y" /var/lib/minikube/build/build.1899950010.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1899950010.tar': No such file or directory
I0815 16:18:34.482174    3024 ssh_runner.go:362] scp /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.1899950010.tar --> /var/lib/minikube/build/build.1899950010.tar (3072 bytes)
I0815 16:18:34.509183    3024 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1899950010
I0815 16:18:34.518112    3024 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1899950010 -xf /var/lib/minikube/build/build.1899950010.tar
I0815 16:18:34.527024    3024 docker.go:360] Building image: /var/lib/minikube/build/build.1899950010
I0815 16:18:34.527100    3024 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-506000 /var/lib/minikube/build/build.1899950010
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers done
#8 writing image sha256:ce89c1285d7bb58467d61d112eb7955514d9826d3043c6ab69da808b0cba6192 done
#8 naming to localhost/my-image:functional-506000 done
#8 DONE 0.0s
I0815 16:18:36.311421    3024 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-506000 /var/lib/minikube/build/build.1899950010: (1.784314474s)
I0815 16:18:36.311476    3024 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1899950010
I0815 16:18:36.320128    3024 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1899950010.tar
I0815 16:18:36.327352    3024 build_images.go:217] Built localhost/my-image:functional-506000 from /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.1899950010.tar
I0815 16:18:36.327374    3024 build_images.go:133] succeeded building to: functional-506000
I0815 16:18:36.327379    3024 build_images.go:134] failed building to: 
I0815 16:18:36.327393    3024 main.go:141] libmachine: Making call to close driver server
I0815 16:18:36.327400    3024 main.go:141] libmachine: (functional-506000) Calling .Close
I0815 16:18:36.327542    3024 main.go:141] libmachine: Successfully made call to close driver server
I0815 16:18:36.327553    3024 main.go:141] libmachine: Making call to close connection to plugin binary
I0815 16:18:36.327561    3024 main.go:141] libmachine: Making call to close driver server
I0815 16:18:36.327562    3024 main.go:141] libmachine: (functional-506000) DBG | Closing plugin on server side
I0815 16:18:36.327567    3024 main.go:141] libmachine: (functional-506000) Calling .Close
I0815 16:18:36.327710    3024 main.go:141] libmachine: Successfully made call to close driver server
I0815 16:18:36.327717    3024 main.go:141] libmachine: Making call to close connection to plugin binary
I0815 16:18:36.327726    3024 main.go:141] libmachine: (functional-506000) DBG | Closing plugin on server side
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.29s)
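The BuildKit trace above implies a three-step build context under testdata/build: FROM gcr.io/k8s-minikube/busybox:latest, RUN true, and ADD content.txt /. A sketch that recreates a context of that shape and runs the same build command (the exact Dockerfile and file contents are assumptions inferred from the trace, not the test's actual fixtures):

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// Assemble a build context shaped like testdata/build: a three-step
	// Dockerfile plus the content.txt that it ADDs.
	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(dir)

	dockerfile := "FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		log.Fatal(err)
	}

	// As the log above shows, minikube packs the context into a tar, copies it
	// into the VM over SSH, and runs `docker build` there.
	cmd := exec.Command("out/minikube-darwin-amd64", "-p", "functional-506000",
		"image", "build", "-t", "localhost/my-image:functional-506000", dir)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}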

TestFunctional/parallel/ImageCommands/Setup (1.86s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.815829765s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-506000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.86s)

TestFunctional/parallel/DockerEnv/bash (0.63s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-506000 docker-env) && out/minikube-darwin-amd64 status -p functional-506000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-506000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.63s)
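The bash test works by eval-ing the export lines that `docker-env` prints, so that the host docker CLI talks to the daemon inside the VM. The same can be done programmatically; a sketch, assuming bash-style `export KEY="VALUE"` output (forced here via --shell bash):

package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// docker-env prints lines like: export DOCKER_HOST="tcp://..."
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-506000",
		"docker-env", "--shell", "bash").Output()
	if err != nil {
		log.Fatalf("docker-env: %v", err)
	}
	for _, line := range strings.Split(string(out), "\n") {
		line = strings.TrimPrefix(line, "export ")
		k, v, ok := strings.Cut(line, "=")
		if !ok || strings.HasPrefix(k, "#") {
			continue // skip comments and blank lines
		}
		os.Setenv(k, strings.Trim(v, `"`))
	}
	// With the environment applied, plain `docker images` lists the images
	// inside the minikube VM rather than on the host.
	images, err := exec.Command("docker", "images").CombinedOutput()
	if err != nil {
		log.Fatalf("docker images: %v", err)
	}
	os.Stdout.Write(images)
}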

TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.25s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.25s)
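
All three update-context subtests invoke the same command, which rewrites the kubeconfig entry to match the cluster's current endpoint; manually (placeholder profile):

	$ minikube -p <profile> update-context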

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 image load --daemon kicbase/echo-server:functional-506000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.12s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 image load --daemon kicbase/echo-server:functional-506000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.71s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-506000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 image load --daemon kicbase/echo-server:functional-506000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.48s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 image save kicbase/echo-server:functional-506000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 image rm kicbase/echo-server:functional-506000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.37s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.74s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-506000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 image save --daemon kicbase/echo-server:functional-506000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-506000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)
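
Taken together, the image subtests above cover the full save/load round trip; a manual sketch (profile, tag, and tar path are placeholders):

	$ minikube -p <profile> image save kicbase/echo-server:<profile> /tmp/echo-server.tar
	$ minikube -p <profile> image rm kicbase/echo-server:<profile>
	$ minikube -p <profile> image load /tmp/echo-server.tar
	$ minikube -p <profile> image save --daemon kicbase/echo-server:<profile>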

TestFunctional/parallel/ServiceCmd/DeployApp (20.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-506000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-506000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-v6qsr" [98c0d866-fff4-49c3-8ea2-ae1acc95331f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-v6qsr" [98c0d866-fff4-49c3-8ea2-ae1acc95331f] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 20.003190103s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (20.11s)
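
The hello-node workload that the remaining service tests query is created with plain kubectl; equivalent manual steps (context name is a placeholder):

	$ kubectl --context <profile> create deployment hello-node --image=registry.k8s.io/echoserver:1.8
	$ kubectl --context <profile> expose deployment hello-node --type=NodePort --port=8080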

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.38s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-506000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-506000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-506000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-506000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2707: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.38s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-506000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.14s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-506000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [c4602960-ad2f-4642-a905-e5fce96051f5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [c4602960-ad2f-4642-a905-e5fce96051f5] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.003678259s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.14s)

TestFunctional/parallel/ServiceCmd/List (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.38s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 service list -o json
functional_test.go:1494: Took "372.073209ms" to run "out/minikube-darwin-amd64 -p functional-506000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.169.0.4:30377
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.27s)

TestFunctional/parallel/ServiceCmd/Format (0.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.26s)

TestFunctional/parallel/ServiceCmd/URL (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.169.0.4:30377
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.28s)
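
The four service lookups above correspond to these invocations, each printing the hello-node NodePort endpoint in a different form (placeholder profile):

	$ minikube -p <profile> service list -o json
	$ minikube -p <profile> service --namespace=default --https --url hello-node
	$ minikube -p <profile> service hello-node --url --format={{.IP}}
	$ minikube -p <profile> service hello-node --url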

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-506000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.10.159 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-506000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)
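
The tunnel serial tests reduce to: start a tunnel, wait for the LoadBalancer service to get an ingress IP, hit it, then tear the tunnel down. A rough manual sketch (tunnel needs elevated privileges for route setup; profile and service names are placeholders):

	$ minikube -p <profile> tunnel &
	$ kubectl --context <profile> get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	$ kill %1    # stop the tunnel, which removes its routes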

TestFunctional/parallel/ProfileCmd/profile_not_create (0.26s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.26s)

TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1315: Took "180.491791ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1329: Took "81.012866ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1366: Took "181.446582ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1379: Took "80.899045ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)
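
The profile timing checks compare full and light listings; manually (the --light variant is expected to be faster because it skips probing each cluster's status):

	$ minikube profile list
	$ minikube profile list -o json --light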

TestFunctional/parallel/MountCmd/any-port (6.23s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-506000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port2790096046/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1723763901070828000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port2790096046/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1723763901070828000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port2790096046/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1723763901070828000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port2790096046/001/test-1723763901070828000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-506000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (153.777525ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 15 23:18 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 15 23:18 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 15 23:18 test-1723763901070828000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 ssh cat /mount-9p/test-1723763901070828000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-506000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [a5b4f071-7e5f-4535-902a-0357444ee9a1] Pending
helpers_test.go:344: "busybox-mount" [a5b4f071-7e5f-4535-902a-0357444ee9a1] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [a5b4f071-7e5f-4535-902a-0357444ee9a1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [a5b4f071-7e5f-4535-902a-0357444ee9a1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.005007167s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-506000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-506000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port2790096046/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.23s)
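
The mount test drives a 9p mount end to end; a manual equivalent (host path and profile are placeholders):

	$ minikube mount -p <profile> /tmp/shared:/mount-9p &
	$ minikube -p <profile> ssh "findmnt -T /mount-9p | grep 9p"
	$ minikube mount -p <profile> --kill=true    # clean up lingering mount processes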

TestFunctional/parallel/MountCmd/specific-port (1.35s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-506000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port1517015227/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-506000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (155.844691ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-506000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port1517015227/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-506000 ssh "sudo umount -f /mount-9p": exit status 1 (128.674841ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-506000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-506000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port1517015227/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.35s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.92s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-506000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3054460142/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-506000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3054460142/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-506000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3054460142/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-506000 ssh "findmnt -T" /mount1: exit status 1 (159.237444ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-506000 ssh "findmnt -T" /mount1: exit status 1 (208.244571ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-506000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-506000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-506000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3054460142/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-506000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3054460142/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-506000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3054460142/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.92s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-506000
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-506000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-506000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (198.2s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-138000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit 
E0815 16:18:57.655653    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:19:25.359859    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-138000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit : (3m17.816755645s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (198.20s)
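
The HA suite brings up a multi-control-plane cluster with a single start; the equivalent manual invocation (profile name is a placeholder):

	$ minikube start -p <profile> --ha --wait=true --memory=2200 --driver=hyperkit
	$ minikube -p <profile> status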

TestMultiControlPlane/serial/DeployApp (5.3s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-138000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-138000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-138000 -- rollout status deployment/busybox: (3.019443772s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-138000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-138000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-138000 -- exec busybox-7dff88458-s6zqd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-138000 -- exec busybox-7dff88458-t5sdh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-138000 -- exec busybox-7dff88458-wgww9 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-138000 -- exec busybox-7dff88458-s6zqd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-138000 -- exec busybox-7dff88458-t5sdh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-138000 -- exec busybox-7dff88458-wgww9 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-138000 -- exec busybox-7dff88458-s6zqd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-138000 -- exec busybox-7dff88458-t5sdh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-138000 -- exec busybox-7dff88458-wgww9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.30s)

TestMultiControlPlane/serial/PingHostFromPods (1.3s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-138000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-138000 -- exec busybox-7dff88458-s6zqd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-138000 -- exec busybox-7dff88458-s6zqd -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-138000 -- exec busybox-7dff88458-t5sdh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-138000 -- exec busybox-7dff88458-t5sdh -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-138000 -- exec busybox-7dff88458-wgww9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-138000 -- exec busybox-7dff88458-wgww9 -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.30s)
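
The host-reachability check extracts the IP that minikube publishes as host.minikube.internal inside the guest and pings it from a pod; manually (context, pod name, and host IP are placeholders):

	$ kubectl --context <profile> exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	$ kubectl --context <profile> exec <busybox-pod> -- sh -c "ping -c 1 <host-ip>"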

TestMultiControlPlane/serial/AddWorkerNode (49.79s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-138000 -v=7 --alsologtostderr
E0815 16:22:37.837765    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:22:37.845127    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:22:37.857222    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:22:37.879946    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:22:37.922393    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:22:38.003660    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:22:38.164991    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:22:38.486686    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:22:39.128378    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:22:40.410142    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:22:42.971447    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:22:48.093953    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:22:58.335923    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-138000 -v=7 --alsologtostderr: (49.32360572s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (49.79s)
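
Growing the running cluster by one worker is a single command; manually (placeholder profile):

	$ minikube node add -p <profile>
	$ minikube -p <profile> status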

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-138000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.35s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.35s)

TestMultiControlPlane/serial/CopyFile (9.37s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 cp testdata/cp-test.txt ha-138000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 ssh -n ha-138000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 cp ha-138000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1119105544/001/cp-test_ha-138000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 ssh -n ha-138000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 cp ha-138000:/home/docker/cp-test.txt ha-138000-m02:/home/docker/cp-test_ha-138000_ha-138000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 ssh -n ha-138000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 ssh -n ha-138000-m02 "sudo cat /home/docker/cp-test_ha-138000_ha-138000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 cp ha-138000:/home/docker/cp-test.txt ha-138000-m03:/home/docker/cp-test_ha-138000_ha-138000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 ssh -n ha-138000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 ssh -n ha-138000-m03 "sudo cat /home/docker/cp-test_ha-138000_ha-138000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 cp ha-138000:/home/docker/cp-test.txt ha-138000-m04:/home/docker/cp-test_ha-138000_ha-138000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 ssh -n ha-138000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 ssh -n ha-138000-m04 "sudo cat /home/docker/cp-test_ha-138000_ha-138000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 cp testdata/cp-test.txt ha-138000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 ssh -n ha-138000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 cp ha-138000-m02:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1119105544/001/cp-test_ha-138000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 ssh -n ha-138000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 cp ha-138000-m02:/home/docker/cp-test.txt ha-138000:/home/docker/cp-test_ha-138000-m02_ha-138000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 ssh -n ha-138000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 ssh -n ha-138000 "sudo cat /home/docker/cp-test_ha-138000-m02_ha-138000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 cp ha-138000-m02:/home/docker/cp-test.txt ha-138000-m03:/home/docker/cp-test_ha-138000-m02_ha-138000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 ssh -n ha-138000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 ssh -n ha-138000-m03 "sudo cat /home/docker/cp-test_ha-138000-m02_ha-138000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 cp ha-138000-m02:/home/docker/cp-test.txt ha-138000-m04:/home/docker/cp-test_ha-138000-m02_ha-138000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 ssh -n ha-138000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 ssh -n ha-138000-m04 "sudo cat /home/docker/cp-test_ha-138000-m02_ha-138000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 cp testdata/cp-test.txt ha-138000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 ssh -n ha-138000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 cp ha-138000-m03:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1119105544/001/cp-test_ha-138000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 ssh -n ha-138000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 cp ha-138000-m03:/home/docker/cp-test.txt ha-138000:/home/docker/cp-test_ha-138000-m03_ha-138000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 ssh -n ha-138000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 ssh -n ha-138000 "sudo cat /home/docker/cp-test_ha-138000-m03_ha-138000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 cp ha-138000-m03:/home/docker/cp-test.txt ha-138000-m02:/home/docker/cp-test_ha-138000-m03_ha-138000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 ssh -n ha-138000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 ssh -n ha-138000-m02 "sudo cat /home/docker/cp-test_ha-138000-m03_ha-138000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 cp ha-138000-m03:/home/docker/cp-test.txt ha-138000-m04:/home/docker/cp-test_ha-138000-m03_ha-138000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 ssh -n ha-138000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 ssh -n ha-138000-m04 "sudo cat /home/docker/cp-test_ha-138000-m03_ha-138000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 cp testdata/cp-test.txt ha-138000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 ssh -n ha-138000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 cp ha-138000-m04:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1119105544/001/cp-test_ha-138000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 ssh -n ha-138000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 cp ha-138000-m04:/home/docker/cp-test.txt ha-138000:/home/docker/cp-test_ha-138000-m04_ha-138000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 ssh -n ha-138000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 ssh -n ha-138000 "sudo cat /home/docker/cp-test_ha-138000-m04_ha-138000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 cp ha-138000-m04:/home/docker/cp-test.txt ha-138000-m02:/home/docker/cp-test_ha-138000-m04_ha-138000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 ssh -n ha-138000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 ssh -n ha-138000-m02 "sudo cat /home/docker/cp-test_ha-138000-m04_ha-138000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 cp ha-138000-m04:/home/docker/cp-test.txt ha-138000-m03:/home/docker/cp-test_ha-138000-m04_ha-138000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 ssh -n ha-138000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 ssh -n ha-138000-m03 "sudo cat /home/docker/cp-test_ha-138000-m04_ha-138000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (9.37s)
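
CopyFile exercises every cp direction across the four nodes; the core pattern it repeats is (profile and node names are placeholders):

	$ minikube -p <profile> cp testdata/cp-test.txt <profile>-m02:/home/docker/cp-test.txt
	$ minikube -p <profile> ssh -n <profile>-m02 "sudo cat /home/docker/cp-test.txt"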

TestMultiControlPlane/serial/StopSecondaryNode (8.7s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 node stop m02 -v=7 --alsologtostderr
E0815 16:23:18.817181    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-138000 node stop m02 -v=7 --alsologtostderr: (8.344936188s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-138000 status -v=7 --alsologtostderr: exit status 7 (358.43871ms)

-- stdout --
	ha-138000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-138000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-138000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-138000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0815 16:23:22.674718    3576 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:23:22.674998    3576 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:23:22.675003    3576 out.go:358] Setting ErrFile to fd 2...
	I0815 16:23:22.675007    3576 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:23:22.675177    3576 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-977/.minikube/bin
	I0815 16:23:22.675347    3576 out.go:352] Setting JSON to false
	I0815 16:23:22.675367    3576 mustload.go:65] Loading cluster: ha-138000
	I0815 16:23:22.675397    3576 notify.go:220] Checking for updates...
	I0815 16:23:22.675646    3576 config.go:182] Loaded profile config "ha-138000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:23:22.675664    3576 status.go:255] checking status of ha-138000 ...
	I0815 16:23:22.676055    3576 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:23:22.676094    3576 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:23:22.684938    3576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51888
	I0815 16:23:22.685392    3576 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:23:22.685826    3576 main.go:141] libmachine: Using API Version  1
	I0815 16:23:22.685844    3576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:23:22.686066    3576 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:23:22.686182    3576 main.go:141] libmachine: (ha-138000) Calling .GetState
	I0815 16:23:22.686291    3576 main.go:141] libmachine: (ha-138000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:23:22.686365    3576 main.go:141] libmachine: (ha-138000) DBG | hyperkit pid from json: 3071
	I0815 16:23:22.687380    3576 status.go:330] ha-138000 host status = "Running" (err=<nil>)
	I0815 16:23:22.687396    3576 host.go:66] Checking if "ha-138000" exists ...
	I0815 16:23:22.687653    3576 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:23:22.687675    3576 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:23:22.696055    3576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51890
	I0815 16:23:22.696413    3576 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:23:22.696745    3576 main.go:141] libmachine: Using API Version  1
	I0815 16:23:22.696754    3576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:23:22.696997    3576 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:23:22.697110    3576 main.go:141] libmachine: (ha-138000) Calling .GetIP
	I0815 16:23:22.697176    3576 host.go:66] Checking if "ha-138000" exists ...
	I0815 16:23:22.697427    3576 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:23:22.697450    3576 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:23:22.708079    3576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51892
	I0815 16:23:22.708478    3576 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:23:22.708822    3576 main.go:141] libmachine: Using API Version  1
	I0815 16:23:22.708844    3576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:23:22.709036    3576 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:23:22.709146    3576 main.go:141] libmachine: (ha-138000) Calling .DriverName
	I0815 16:23:22.709272    3576 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 16:23:22.709293    3576 main.go:141] libmachine: (ha-138000) Calling .GetSSHHostname
	I0815 16:23:22.709371    3576 main.go:141] libmachine: (ha-138000) Calling .GetSSHPort
	I0815 16:23:22.709450    3576 main.go:141] libmachine: (ha-138000) Calling .GetSSHKeyPath
	I0815 16:23:22.709527    3576 main.go:141] libmachine: (ha-138000) Calling .GetSSHUsername
	I0815 16:23:22.709607    3576 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000/id_rsa Username:docker}
	I0815 16:23:22.744054    3576 ssh_runner.go:195] Run: systemctl --version
	I0815 16:23:22.748221    3576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 16:23:22.759107    3576 kubeconfig.go:125] found "ha-138000" server: "https://192.169.0.254:8443"
	I0815 16:23:22.759129    3576 api_server.go:166] Checking apiserver status ...
	I0815 16:23:22.759170    3576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 16:23:22.772310    3576 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1958/cgroup
	W0815 16:23:22.781062    3576 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1958/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 16:23:22.781116    3576 ssh_runner.go:195] Run: ls
	I0815 16:23:22.784265    3576 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0815 16:23:22.787433    3576 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0815 16:23:22.787444    3576 status.go:422] ha-138000 apiserver status = Running (err=<nil>)
	I0815 16:23:22.787455    3576 status.go:257] ha-138000 status: &{Name:ha-138000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 16:23:22.787466    3576 status.go:255] checking status of ha-138000-m02 ...
	I0815 16:23:22.787753    3576 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:23:22.787775    3576 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:23:22.796373    3576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51896
	I0815 16:23:22.796740    3576 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:23:22.797089    3576 main.go:141] libmachine: Using API Version  1
	I0815 16:23:22.797105    3576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:23:22.797323    3576 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:23:22.797427    3576 main.go:141] libmachine: (ha-138000-m02) Calling .GetState
	I0815 16:23:22.797507    3576 main.go:141] libmachine: (ha-138000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:23:22.797578    3576 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid from json: 3094
	I0815 16:23:22.798562    3576 main.go:141] libmachine: (ha-138000-m02) DBG | hyperkit pid 3094 missing from process table
	I0815 16:23:22.798582    3576 status.go:330] ha-138000-m02 host status = "Stopped" (err=<nil>)
	I0815 16:23:22.798590    3576 status.go:343] host is not running, skipping remaining checks
	I0815 16:23:22.798596    3576 status.go:257] ha-138000-m02 status: &{Name:ha-138000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 16:23:22.798610    3576 status.go:255] checking status of ha-138000-m03 ...
	I0815 16:23:22.798882    3576 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:23:22.798902    3576 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:23:22.807640    3576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51898
	I0815 16:23:22.808000    3576 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:23:22.808352    3576 main.go:141] libmachine: Using API Version  1
	I0815 16:23:22.808366    3576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:23:22.808574    3576 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:23:22.808678    3576 main.go:141] libmachine: (ha-138000-m03) Calling .GetState
	I0815 16:23:22.808760    3576 main.go:141] libmachine: (ha-138000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:23:22.808840    3576 main.go:141] libmachine: (ha-138000-m03) DBG | hyperkit pid from json: 3119
	I0815 16:23:22.809823    3576 status.go:330] ha-138000-m03 host status = "Running" (err=<nil>)
	I0815 16:23:22.809834    3576 host.go:66] Checking if "ha-138000-m03" exists ...
	I0815 16:23:22.810088    3576 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:23:22.810112    3576 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:23:22.818616    3576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51900
	I0815 16:23:22.818971    3576 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:23:22.819312    3576 main.go:141] libmachine: Using API Version  1
	I0815 16:23:22.819336    3576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:23:22.819538    3576 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:23:22.819659    3576 main.go:141] libmachine: (ha-138000-m03) Calling .GetIP
	I0815 16:23:22.819745    3576 host.go:66] Checking if "ha-138000-m03" exists ...
	I0815 16:23:22.819987    3576 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:23:22.820011    3576 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:23:22.828399    3576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51902
	I0815 16:23:22.828749    3576 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:23:22.829067    3576 main.go:141] libmachine: Using API Version  1
	I0815 16:23:22.829083    3576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:23:22.829298    3576 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:23:22.829404    3576 main.go:141] libmachine: (ha-138000-m03) Calling .DriverName
	I0815 16:23:22.829535    3576 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 16:23:22.829546    3576 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHHostname
	I0815 16:23:22.829617    3576 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHPort
	I0815 16:23:22.829696    3576 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHKeyPath
	I0815 16:23:22.829767    3576 main.go:141] libmachine: (ha-138000-m03) Calling .GetSSHUsername
	I0815 16:23:22.829841    3576 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m03/id_rsa Username:docker}
	I0815 16:23:22.863959    3576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 16:23:22.875477    3576 kubeconfig.go:125] found "ha-138000" server: "https://192.169.0.254:8443"
	I0815 16:23:22.875492    3576 api_server.go:166] Checking apiserver status ...
	I0815 16:23:22.875532    3576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 16:23:22.886398    3576 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1837/cgroup
	W0815 16:23:22.893959    3576 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1837/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 16:23:22.894005    3576 ssh_runner.go:195] Run: ls
	I0815 16:23:22.897286    3576 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0815 16:23:22.900337    3576 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0815 16:23:22.900349    3576 status.go:422] ha-138000-m03 apiserver status = Running (err=<nil>)
	I0815 16:23:22.900357    3576 status.go:257] ha-138000-m03 status: &{Name:ha-138000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 16:23:22.900366    3576 status.go:255] checking status of ha-138000-m04 ...
	I0815 16:23:22.900633    3576 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:23:22.900655    3576 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:23:22.909431    3576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51906
	I0815 16:23:22.909791    3576 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:23:22.910114    3576 main.go:141] libmachine: Using API Version  1
	I0815 16:23:22.910125    3576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:23:22.910324    3576 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:23:22.910424    3576 main.go:141] libmachine: (ha-138000-m04) Calling .GetState
	I0815 16:23:22.910504    3576 main.go:141] libmachine: (ha-138000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:23:22.910581    3576 main.go:141] libmachine: (ha-138000-m04) DBG | hyperkit pid from json: 3240
	I0815 16:23:22.911561    3576 status.go:330] ha-138000-m04 host status = "Running" (err=<nil>)
	I0815 16:23:22.911572    3576 host.go:66] Checking if "ha-138000-m04" exists ...
	I0815 16:23:22.911829    3576 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:23:22.911849    3576 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:23:22.920461    3576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51908
	I0815 16:23:22.920849    3576 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:23:22.921185    3576 main.go:141] libmachine: Using API Version  1
	I0815 16:23:22.921195    3576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:23:22.921424    3576 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:23:22.921533    3576 main.go:141] libmachine: (ha-138000-m04) Calling .GetIP
	I0815 16:23:22.921624    3576 host.go:66] Checking if "ha-138000-m04" exists ...
	I0815 16:23:22.921884    3576 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:23:22.921907    3576 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:23:22.930257    3576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51910
	I0815 16:23:22.930600    3576 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:23:22.930962    3576 main.go:141] libmachine: Using API Version  1
	I0815 16:23:22.930979    3576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:23:22.931192    3576 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:23:22.931308    3576 main.go:141] libmachine: (ha-138000-m04) Calling .DriverName
	I0815 16:23:22.931434    3576 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 16:23:22.931446    3576 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHHostname
	I0815 16:23:22.931532    3576 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHPort
	I0815 16:23:22.931608    3576 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHKeyPath
	I0815 16:23:22.931688    3576 main.go:141] libmachine: (ha-138000-m04) Calling .GetSSHUsername
	I0815 16:23:22.931772    3576 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/ha-138000-m04/id_rsa Username:docker}
	I0815 16:23:22.966100    3576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 16:23:22.976601    3576 status.go:257] ha-138000-m04 status: &{Name:ha-138000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (8.70s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.27s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.27s)

TestMultiControlPlane/serial/RestartSecondaryNode (40.42s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 node start m02 -v=7 --alsologtostderr
E0815 16:23:57.653595    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:23:59.779935    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-138000 node start m02 -v=7 --alsologtostderr: (39.916935119s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-138000 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (40.42s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.35s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.35s)

TestImageBuild/serial/Setup (38.09s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-277000 --driver=hyperkit 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-277000 --driver=hyperkit : (38.092524001s)
--- PASS: TestImageBuild/serial/Setup (38.09s)

TestImageBuild/serial/NormalBuild (1.66s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-277000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-277000: (1.658671168s)
--- PASS: TestImageBuild/serial/NormalBuild (1.66s)

TestImageBuild/serial/BuildWithBuildArg (0.75s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-277000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.75s)

TestImageBuild/serial/BuildWithDockerIgnore (0.6s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-277000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.60s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.67s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-277000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.67s)

TestJSONOutput/start/Command (51.87s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-706000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-706000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (51.874264939s)
--- PASS: TestJSONOutput/start/Command (51.87s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.48s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-706000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.48s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.45s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-706000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.45s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.34s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-706000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-706000 --output=json --user=testUser: (8.342089282s)
--- PASS: TestJSONOutput/stop/Command (8.34s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.57s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-137000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-137000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (360.472738ms)

-- stdout --
	{"specversion":"1.0","id":"2b333575-d9e6-40de-9ade-2cac121b60ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-137000] minikube v1.33.1 on Darwin 14.6.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"82e4d9c6-0cff-48ba-bba8-f3dd66d755ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19452"}}
	{"specversion":"1.0","id":"02765bf6-9ce6-4a5f-b4f9-3512905c58f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig"}}
	{"specversion":"1.0","id":"cfbf1a89-6f2c-4fd0-a3d1-6affec912613","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"8875387e-ffb3-4d06-9807-4d4e672a2467","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"46a0e2e0-ab34-42be-81cf-aae8421bdf4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-977/.minikube"}}
	{"specversion":"1.0","id":"9478b1e3-1aa5-4feb-9403-59295cd222f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1d9776c1-93b6-4694-ad39-d3f4d00cb335","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-137000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-137000
--- PASS: TestErrorJSONOutput (0.57s)

TestMainNoArgs (0.08s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (89.02s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-375000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-375000 --driver=hyperkit : (40.322624749s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-386000 --driver=hyperkit 
E0815 16:37:37.873127    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-386000 --driver=hyperkit : (37.315570341s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-375000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-386000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-386000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-386000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-386000: (5.280293544s)
helpers_test.go:175: Cleaning up "first-375000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-375000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-375000: (5.285974845s)
--- PASS: TestMinikubeProfile (89.02s)

TestMultiNode/serial/FreshStart2Nodes (106.33s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-562000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
multinode_test.go:96: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-562000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (1m46.0859816s)
multinode_test.go:102: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (106.33s)

TestMultiNode/serial/DeployApp2Nodes (4.43s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-562000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-562000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-562000 -- rollout status deployment/busybox: (2.772232437s)
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-562000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-562000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-562000 -- exec busybox-7dff88458-bdvxj -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-562000 -- exec busybox-7dff88458-wcc7l -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-562000 -- exec busybox-7dff88458-bdvxj -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-562000 -- exec busybox-7dff88458-wcc7l -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-562000 -- exec busybox-7dff88458-bdvxj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-562000 -- exec busybox-7dff88458-wcc7l -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.43s)

TestMultiNode/serial/PingHostFrom2Pods (0.89s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-562000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-562000 -- exec busybox-7dff88458-bdvxj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-562000 -- exec busybox-7dff88458-bdvxj -- sh -c "ping -c 1 192.169.0.1"
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-562000 -- exec busybox-7dff88458-wcc7l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-562000 -- exec busybox-7dff88458-wcc7l -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.89s)

TestMultiNode/serial/AddNode (45.47s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-562000 -v 3 --alsologtostderr
E0815 16:42:37.874243    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-562000 -v 3 --alsologtostderr: (45.150654057s)
multinode_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.47s)

TestMultiNode/serial/MultiNodeLabels (0.05s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-562000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.05s)

TestMultiNode/serial/ProfileList (0.18s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.18s)

TestMultiNode/serial/CopyFile (5.23s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 cp testdata/cp-test.txt multinode-562000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 ssh -n multinode-562000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 cp multinode-562000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile1835507836/001/cp-test_multinode-562000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 ssh -n multinode-562000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 cp multinode-562000:/home/docker/cp-test.txt multinode-562000-m02:/home/docker/cp-test_multinode-562000_multinode-562000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 ssh -n multinode-562000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 ssh -n multinode-562000-m02 "sudo cat /home/docker/cp-test_multinode-562000_multinode-562000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 cp multinode-562000:/home/docker/cp-test.txt multinode-562000-m03:/home/docker/cp-test_multinode-562000_multinode-562000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 ssh -n multinode-562000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 ssh -n multinode-562000-m03 "sudo cat /home/docker/cp-test_multinode-562000_multinode-562000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 cp testdata/cp-test.txt multinode-562000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 ssh -n multinode-562000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 cp multinode-562000-m02:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile1835507836/001/cp-test_multinode-562000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 ssh -n multinode-562000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 cp multinode-562000-m02:/home/docker/cp-test.txt multinode-562000:/home/docker/cp-test_multinode-562000-m02_multinode-562000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 ssh -n multinode-562000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 ssh -n multinode-562000 "sudo cat /home/docker/cp-test_multinode-562000-m02_multinode-562000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 cp multinode-562000-m02:/home/docker/cp-test.txt multinode-562000-m03:/home/docker/cp-test_multinode-562000-m02_multinode-562000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 ssh -n multinode-562000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 ssh -n multinode-562000-m03 "sudo cat /home/docker/cp-test_multinode-562000-m02_multinode-562000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 cp testdata/cp-test.txt multinode-562000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 ssh -n multinode-562000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 cp multinode-562000-m03:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile1835507836/001/cp-test_multinode-562000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 ssh -n multinode-562000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 cp multinode-562000-m03:/home/docker/cp-test.txt multinode-562000:/home/docker/cp-test_multinode-562000-m03_multinode-562000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 ssh -n multinode-562000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 ssh -n multinode-562000 "sudo cat /home/docker/cp-test_multinode-562000-m03_multinode-562000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 cp multinode-562000-m03:/home/docker/cp-test.txt multinode-562000-m02:/home/docker/cp-test_multinode-562000-m03_multinode-562000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 ssh -n multinode-562000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 ssh -n multinode-562000-m02 "sudo cat /home/docker/cp-test_multinode-562000-m03_multinode-562000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.23s)

TestMultiNode/serial/StopNode (2.84s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p multinode-562000 node stop m03: (2.324731065s)
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-562000 status: exit status 7 (258.030515ms)

-- stdout --
	multinode-562000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-562000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-562000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-562000 status --alsologtostderr: exit status 7 (253.118208ms)

-- stdout --
	multinode-562000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-562000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-562000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0815 16:43:15.968047    5002 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:43:15.968241    5002 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:43:15.968246    5002 out.go:358] Setting ErrFile to fd 2...
	I0815 16:43:15.968250    5002 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:43:15.968438    5002 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-977/.minikube/bin
	I0815 16:43:15.968638    5002 out.go:352] Setting JSON to false
	I0815 16:43:15.968657    5002 mustload.go:65] Loading cluster: multinode-562000
	I0815 16:43:15.968706    5002 notify.go:220] Checking for updates...
	I0815 16:43:15.968991    5002 config.go:182] Loaded profile config "multinode-562000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:43:15.969006    5002 status.go:255] checking status of multinode-562000 ...
	I0815 16:43:15.969383    5002 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:43:15.969443    5002 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:43:15.978384    5002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53351
	I0815 16:43:15.978753    5002 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:43:15.979173    5002 main.go:141] libmachine: Using API Version  1
	I0815 16:43:15.979197    5002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:43:15.979428    5002 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:43:15.979567    5002 main.go:141] libmachine: (multinode-562000) Calling .GetState
	I0815 16:43:15.979653    5002 main.go:141] libmachine: (multinode-562000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:43:15.979740    5002 main.go:141] libmachine: (multinode-562000) DBG | hyperkit pid from json: 4684
	I0815 16:43:15.980909    5002 status.go:330] multinode-562000 host status = "Running" (err=<nil>)
	I0815 16:43:15.980929    5002 host.go:66] Checking if "multinode-562000" exists ...
	I0815 16:43:15.981183    5002 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:43:15.981209    5002 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:43:15.989718    5002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53353
	I0815 16:43:15.990071    5002 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:43:15.990415    5002 main.go:141] libmachine: Using API Version  1
	I0815 16:43:15.990432    5002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:43:15.990636    5002 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:43:15.990750    5002 main.go:141] libmachine: (multinode-562000) Calling .GetIP
	I0815 16:43:15.990840    5002 host.go:66] Checking if "multinode-562000" exists ...
	I0815 16:43:15.991093    5002 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:43:15.991119    5002 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:43:15.999998    5002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53355
	I0815 16:43:16.000383    5002 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:43:16.000701    5002 main.go:141] libmachine: Using API Version  1
	I0815 16:43:16.000710    5002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:43:16.000922    5002 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:43:16.001045    5002 main.go:141] libmachine: (multinode-562000) Calling .DriverName
	I0815 16:43:16.001191    5002 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 16:43:16.001211    5002 main.go:141] libmachine: (multinode-562000) Calling .GetSSHHostname
	I0815 16:43:16.001289    5002 main.go:141] libmachine: (multinode-562000) Calling .GetSSHPort
	I0815 16:43:16.001378    5002 main.go:141] libmachine: (multinode-562000) Calling .GetSSHKeyPath
	I0815 16:43:16.001458    5002 main.go:141] libmachine: (multinode-562000) Calling .GetSSHUsername
	I0815 16:43:16.001552    5002 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/multinode-562000/id_rsa Username:docker}
	I0815 16:43:16.033501    5002 ssh_runner.go:195] Run: systemctl --version
	I0815 16:43:16.037807    5002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 16:43:16.049043    5002 kubeconfig.go:125] found "multinode-562000" server: "https://192.169.0.14:8443"
	I0815 16:43:16.049066    5002 api_server.go:166] Checking apiserver status ...
	I0815 16:43:16.049110    5002 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 16:43:16.060969    5002 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1972/cgroup
	W0815 16:43:16.068832    5002 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1972/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 16:43:16.068881    5002 ssh_runner.go:195] Run: ls
	I0815 16:43:16.072288    5002 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0815 16:43:16.075436    5002 api_server.go:279] https://192.169.0.14:8443/healthz returned 200:
	ok
	I0815 16:43:16.075448    5002 status.go:422] multinode-562000 apiserver status = Running (err=<nil>)
	I0815 16:43:16.075457    5002 status.go:257] multinode-562000 status: &{Name:multinode-562000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 16:43:16.075467    5002 status.go:255] checking status of multinode-562000-m02 ...
	I0815 16:43:16.075730    5002 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:43:16.075752    5002 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:43:16.084496    5002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53359
	I0815 16:43:16.084875    5002 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:43:16.085183    5002 main.go:141] libmachine: Using API Version  1
	I0815 16:43:16.085191    5002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:43:16.085429    5002 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:43:16.085553    5002 main.go:141] libmachine: (multinode-562000-m02) Calling .GetState
	I0815 16:43:16.085637    5002 main.go:141] libmachine: (multinode-562000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:43:16.085722    5002 main.go:141] libmachine: (multinode-562000-m02) DBG | hyperkit pid from json: 4714
	I0815 16:43:16.086914    5002 status.go:330] multinode-562000-m02 host status = "Running" (err=<nil>)
	I0815 16:43:16.086925    5002 host.go:66] Checking if "multinode-562000-m02" exists ...
	I0815 16:43:16.087187    5002 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:43:16.087227    5002 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:43:16.095964    5002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53361
	I0815 16:43:16.096330    5002 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:43:16.096675    5002 main.go:141] libmachine: Using API Version  1
	I0815 16:43:16.096696    5002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:43:16.096911    5002 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:43:16.097022    5002 main.go:141] libmachine: (multinode-562000-m02) Calling .GetIP
	I0815 16:43:16.097103    5002 host.go:66] Checking if "multinode-562000-m02" exists ...
	I0815 16:43:16.097399    5002 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:43:16.097423    5002 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:43:16.106088    5002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53363
	I0815 16:43:16.106446    5002 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:43:16.106784    5002 main.go:141] libmachine: Using API Version  1
	I0815 16:43:16.106800    5002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:43:16.107049    5002 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:43:16.107174    5002 main.go:141] libmachine: (multinode-562000-m02) Calling .DriverName
	I0815 16:43:16.107315    5002 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 16:43:16.107327    5002 main.go:141] libmachine: (multinode-562000-m02) Calling .GetSSHHostname
	I0815 16:43:16.107413    5002 main.go:141] libmachine: (multinode-562000-m02) Calling .GetSSHPort
	I0815 16:43:16.107510    5002 main.go:141] libmachine: (multinode-562000-m02) Calling .GetSSHKeyPath
	I0815 16:43:16.107585    5002 main.go:141] libmachine: (multinode-562000-m02) Calling .GetSSHUsername
	I0815 16:43:16.107664    5002 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19452-977/.minikube/machines/multinode-562000-m02/id_rsa Username:docker}
	I0815 16:43:16.142520    5002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 16:43:16.153262    5002 status.go:257] multinode-562000-m02 status: &{Name:multinode-562000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0815 16:43:16.153281    5002 status.go:255] checking status of multinode-562000-m03 ...
	I0815 16:43:16.153552    5002 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:43:16.153576    5002 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:43:16.162229    5002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53366
	I0815 16:43:16.162621    5002 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:43:16.162986    5002 main.go:141] libmachine: Using API Version  1
	I0815 16:43:16.162999    5002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:43:16.163219    5002 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:43:16.163336    5002 main.go:141] libmachine: (multinode-562000-m03) Calling .GetState
	I0815 16:43:16.163426    5002 main.go:141] libmachine: (multinode-562000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:43:16.163518    5002 main.go:141] libmachine: (multinode-562000-m03) DBG | hyperkit pid from json: 4791
	I0815 16:43:16.164676    5002 main.go:141] libmachine: (multinode-562000-m03) DBG | hyperkit pid 4791 missing from process table
	I0815 16:43:16.164725    5002 status.go:330] multinode-562000-m03 host status = "Stopped" (err=<nil>)
	I0815 16:43:16.164736    5002 status.go:343] host is not running, skipping remaining checks
	I0815 16:43:16.164743    5002 status.go:257] multinode-562000-m03 status: &{Name:multinode-562000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.84s)

TestMultiNode/serial/StartAfterStop (36.51s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p multinode-562000 node start m03 -v=7 --alsologtostderr: (36.150209599s)
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (36.51s)

TestMultiNode/serial/RestartKeepsNodes (151.09s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-562000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-562000
E0815 16:43:57.690602    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-562000: (18.841905449s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-562000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-562000 --wait=true -v=8 --alsologtostderr: (2m12.136165798s)
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-562000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (151.09s)

TestMultiNode/serial/DeleteNode (3.26s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-darwin-amd64 -p multinode-562000 node delete m03: (2.923597997s)
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (3.26s)

TestMultiNode/serial/StopMultiNode (16.77s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-amd64 -p multinode-562000 stop: (16.612632589s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-562000 status: exit status 7 (79.290439ms)
-- stdout --
	multinode-562000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-562000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-562000 status --alsologtostderr: exit status 7 (78.609108ms)
-- stdout --
	multinode-562000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-562000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0815 16:46:43.776934    5150 out.go:345] Setting OutFile to fd 1 ...
	I0815 16:46:43.777211    5150 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:46:43.777216    5150 out.go:358] Setting ErrFile to fd 2...
	I0815 16:46:43.777220    5150 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 16:46:43.777396    5150 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19452-977/.minikube/bin
	I0815 16:46:43.777582    5150 out.go:352] Setting JSON to false
	I0815 16:46:43.777609    5150 mustload.go:65] Loading cluster: multinode-562000
	I0815 16:46:43.777648    5150 notify.go:220] Checking for updates...
	I0815 16:46:43.777897    5150 config.go:182] Loaded profile config "multinode-562000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 16:46:43.777918    5150 status.go:255] checking status of multinode-562000 ...
	I0815 16:46:43.778276    5150 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:46:43.778319    5150 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:46:43.786832    5150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53596
	I0815 16:46:43.787242    5150 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:46:43.787664    5150 main.go:141] libmachine: Using API Version  1
	I0815 16:46:43.787674    5150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:46:43.787889    5150 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:46:43.787996    5150 main.go:141] libmachine: (multinode-562000) Calling .GetState
	I0815 16:46:43.788087    5150 main.go:141] libmachine: (multinode-562000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:46:43.788151    5150 main.go:141] libmachine: (multinode-562000) DBG | hyperkit pid from json: 5067
	I0815 16:46:43.789015    5150 main.go:141] libmachine: (multinode-562000) DBG | hyperkit pid 5067 missing from process table
	I0815 16:46:43.789035    5150 status.go:330] multinode-562000 host status = "Stopped" (err=<nil>)
	I0815 16:46:43.789044    5150 status.go:343] host is not running, skipping remaining checks
	I0815 16:46:43.789051    5150 status.go:257] multinode-562000 status: &{Name:multinode-562000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 16:46:43.789070    5150 status.go:255] checking status of multinode-562000-m02 ...
	I0815 16:46:43.789297    5150 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0815 16:46:43.789320    5150 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0815 16:46:43.797662    5150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53598
	I0815 16:46:43.797983    5150 main.go:141] libmachine: () Calling .GetVersion
	I0815 16:46:43.798341    5150 main.go:141] libmachine: Using API Version  1
	I0815 16:46:43.798370    5150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 16:46:43.798560    5150 main.go:141] libmachine: () Calling .GetMachineName
	I0815 16:46:43.798682    5150 main.go:141] libmachine: (multinode-562000-m02) Calling .GetState
	I0815 16:46:43.798775    5150 main.go:141] libmachine: (multinode-562000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0815 16:46:43.798838    5150 main.go:141] libmachine: (multinode-562000-m02) DBG | hyperkit pid from json: 5084
	I0815 16:46:43.799709    5150 main.go:141] libmachine: (multinode-562000-m02) DBG | hyperkit pid 5084 missing from process table
	I0815 16:46:43.799763    5150 status.go:330] multinode-562000-m02 host status = "Stopped" (err=<nil>)
	I0815 16:46:43.799772    5150 status.go:343] host is not running, skipping remaining checks
	I0815 16:46:43.799778    5150 status.go:257] multinode-562000-m02 status: &{Name:multinode-562000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (16.77s)
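
The two "Non-zero exit ... exit status 7" lines above are the assertion, not a failure: in this run minikube status exits 7 once the host is stopped, so the test stops both nodes and then checks the code. By hand:

# with the cluster stopped, status reports Stopped on every row and exits non-zero
$ out/minikube-darwin-amd64 -p multinode-562000 status
$ echo $?    # 7 in the run above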

TestMultiNode/serial/RestartMultiNode (107.79s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-562000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
E0815 16:47:00.758827    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
E0815 16:47:37.977503    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-562000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : (1m47.453059787s)
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-562000 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (107.79s)

TestMultiNode/serial/ValidateNameConflict (44.27s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-562000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-562000-m02 --driver=hyperkit 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-562000-m02 --driver=hyperkit : exit status 14 (418.360568ms)
-- stdout --
	* [multinode-562000-m02] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-977/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-562000-m02' is duplicated with machine name 'multinode-562000-m02' in profile 'multinode-562000'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-562000-m03 --driver=hyperkit 
E0815 16:48:57.795183    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-562000-m03 --driver=hyperkit : (38.24994836s)
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-562000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-562000: exit status 80 (298.834352ms)
-- stdout --
	* Adding node m03 to cluster multinode-562000 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-562000-m03 already exists in multinode-562000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-562000-m03
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-562000-m03: (5.245148846s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.27s)
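
Two separate guards get exercised here: start refuses a profile name that collides with a machine name inside another profile (exit 14, MK_USAGE), and node add refuses a node whose name an existing profile already owns (exit 80, GUEST_NODE_ADD). The first check, condensed from the run above:

# multinode-562000-m02 is already a machine inside profile multinode-562000
$ out/minikube-darwin-amd64 start -p multinode-562000-m02 --driver=hyperkit
$ echo $?    # 14: profile name should be unique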

TestPreload (145.74s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-757000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-757000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m18.432744911s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-757000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-757000 image pull gcr.io/k8s-minikube/busybox: (1.194710178s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-757000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-757000: (8.374075288s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-757000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-757000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (52.333411266s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-757000 image list
helpers_test.go:175: Cleaning up "test-preload-757000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-757000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-757000: (5.244835891s)
--- PASS: TestPreload (145.74s)
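
TestPreload verifies that an image pulled into a cluster created with --preload=false is still present after the cluster is stopped and started again. The scenario, condensed to its commands (same profile name as above):

# create without preload, add an image, bounce the VM, expect the image to survive
$ out/minikube-darwin-amd64 start -p test-preload-757000 --preload=false --kubernetes-version=v1.24.4 --driver=hyperkit
$ out/minikube-darwin-amd64 -p test-preload-757000 image pull gcr.io/k8s-minikube/busybox
$ out/minikube-darwin-amd64 stop -p test-preload-757000
$ out/minikube-darwin-amd64 start -p test-preload-757000 --driver=hyperkit
$ out/minikube-darwin-amd64 -p test-preload-757000 image list    # busybox should still be listed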

TestSkaffold (114.69s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe2023022038 version
skaffold_test.go:59: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe2023022038 version: (1.723046578s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-329000 --memory=2600 --driver=hyperkit 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-329000 --memory=2600 --driver=hyperkit : (39.72224877s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe2023022038 run --minikube-profile skaffold-329000 --kube-context skaffold-329000 --status-check=true --port-forward=false --interactive=false
E0815 16:55:41.059034    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe2023022038 run --minikube-profile skaffold-329000 --kube-context skaffold-329000 --status-check=true --port-forward=false --interactive=false: (55.258409966s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-67cc8f56cb-cbs9w" [abd39bea-52f5-4d39-bb2e-df9f540e8869] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.007831642s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-77c74c8d88-hwnl2" [eb2637a4-f869-4040-9560-67ef8190474d] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004197926s
helpers_test.go:175: Cleaning up "skaffold-329000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-329000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-329000: (5.244135357s)
--- PASS: TestSkaffold (114.69s)
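
The skaffold run above deploys the leeroy-app/leeroy-web sample into the cluster and waits for both deployments to become healthy. The flags are the interesting part when pointing skaffold at a specific minikube cluster (the test uses a temp-downloaded skaffold binary; a locally installed one works the same way):

# target one minikube profile/context; disable prompts and port-forwarding for CI
$ skaffold run --minikube-profile skaffold-329000 --kube-context skaffold-329000 --status-check=true --port-forward=false --interactive=false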

TestRunningBinaryUpgrade (75.74s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.2420920355 start -p running-upgrade-342000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.2420920355 start -p running-upgrade-342000 --memory=2200 --vm-driver=hyperkit : (40.260536155s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-342000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:130: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-342000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (29.069327632s)
helpers_test.go:175: Cleaning up "running-upgrade-342000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-342000
E0815 17:00:52.548924    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/skaffold-329000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:00:52.556578    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/skaffold-329000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:00:52.568893    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/skaffold-329000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:00:52.590607    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/skaffold-329000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:00:52.633993    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/skaffold-329000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:00:52.717386    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/skaffold-329000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:00:52.879416    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/skaffold-329000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:00:53.201037    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/skaffold-329000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:00:53.843957    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/skaffold-329000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-342000: (5.268064311s)
--- PASS: TestRunningBinaryUpgrade (75.74s)
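
The upgrade here happens in place: an archived v1.26.0 binary creates the cluster, then the binary under test runs start on the same profile while the VM is still up; passing means the new binary can adopt and reconcile the old cluster. In outline (minikube-v1.26.0 below is a placeholder name for the temp-downloaded binary in the real run):

# old binary creates the cluster, new binary takes it over without an intervening stop
$ minikube-v1.26.0 start -p running-upgrade-342000 --memory=2200 --vm-driver=hyperkit
$ out/minikube-darwin-amd64 start -p running-upgrade-342000 --memory=2200 --driver=hyperkit
$ out/minikube-darwin-amd64 delete -p running-upgrade-342000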

TestKubernetesUpgrade (203.94s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-980000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit 
E0815 16:57:37.988072    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-980000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit : (1m49.56428878s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-980000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-980000: (2.37172249s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-980000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-980000 status --format={{.Host}}: exit status 7 (67.345793ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-980000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:243: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-980000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=hyperkit : (57.236599199s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-980000 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-980000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-980000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit : exit status 106 (464.262817ms)
-- stdout --
	* [kubernetes-upgrade-980000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-977/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-980000
	    minikube start -p kubernetes-upgrade-980000 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9800002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-980000 --kubernetes-version=v1.31.0
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-980000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:275: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-980000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=hyperkit : (28.906881489s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-980000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-980000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-980000: (5.285241404s)
--- PASS: TestKubernetesUpgrade (203.94s)
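
Three assertions are packed into this test: v1.20.0 upgrades to v1.31.0 after a stop; a downgrade request is refused with exit 106 (K8S_DOWNGRADE_UNSUPPORTED) instead of touching the cluster; and a restart at the current version still succeeds afterwards. Condensed:

$ out/minikube-darwin-amd64 start -p kubernetes-upgrade-980000 --kubernetes-version=v1.20.0 --driver=hyperkit
$ out/minikube-darwin-amd64 stop -p kubernetes-upgrade-980000
$ out/minikube-darwin-amd64 start -p kubernetes-upgrade-980000 --kubernetes-version=v1.31.0 --driver=hyperkit
# refused with exit 106: an existing cluster cannot be downgraded safely
$ out/minikube-darwin-amd64 start -p kubernetes-upgrade-980000 --kubernetes-version=v1.20.0 --driver=hyperkit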

TestStoppedBinaryUpgrade/Setup (2.07s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.07s)

TestStoppedBinaryUpgrade/Upgrade (112.51s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.2906478053 start -p stopped-upgrade-165000 --memory=2200 --vm-driver=hyperkit 
E0815 16:58:57.806108    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.2906478053 start -p stopped-upgrade-165000 --memory=2200 --vm-driver=hyperkit : (59.813881811s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.2906478053 -p stopped-upgrade-165000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.2906478053 -p stopped-upgrade-165000 stop: (8.225112185s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-165000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:198: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-165000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (44.466953248s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (112.51s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.93s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-165000
version_upgrade_test.go:206: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-165000: (2.926091956s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.93s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.48s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-614000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-614000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (484.395444ms)
-- stdout --
	* [NoKubernetes-614000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19452-977/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.48s)
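
This subtest is pure flag validation: --no-kubernetes and --kubernetes-version are mutually exclusive, and the exit-14 error also points at the fix when the version comes from global config rather than the command line:

# clear any globally pinned version, then start a VM with no Kubernetes at all
$ out/minikube-darwin-amd64 config unset kubernetes-version
$ out/minikube-darwin-amd64 start -p NoKubernetes-614000 --no-kubernetes --driver=hyperkit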

TestNoKubernetes/serial/StartWithK8s (85.3s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-614000 --driver=hyperkit 
E0815 17:00:55.127344    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/skaffold-329000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:00:57.690115    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/skaffold-329000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:01:02.813785    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/skaffold-329000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:01:13.057512    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/skaffold-329000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:01:33.539911    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/skaffold-329000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:02:14.502948    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/skaffold-329000/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-614000 --driver=hyperkit : (1m25.12810325s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-614000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (85.30s)

TestNoKubernetes/serial/StartWithStopK8s (56.64s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-614000 --no-kubernetes --driver=hyperkit 
E0815 17:02:37.993549    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-614000 --no-kubernetes --driver=hyperkit : (54.117865153s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-614000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-614000 status -o json: exit status 2 (153.239234ms)
-- stdout --
	{"Name":"NoKubernetes-614000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-614000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-614000: (2.367740654s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (56.64s)

TestNoKubernetes/serial/Start (70.4s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-614000 --no-kubernetes --driver=hyperkit 
E0815 17:03:36.428030    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/skaffold-329000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:03:40.884410    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:03:57.812156    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-614000 --no-kubernetes --driver=hyperkit : (1m10.40084252s)
--- PASS: TestNoKubernetes/serial/Start (70.40s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.13s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-614000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-614000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (126.274825ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.13s)
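
The exit code is the assertion: systemctl is-active exits 0 only for an active unit (3 is the standard "inactive" code, surfaced above as "Process exited with status 3" through ssh), so the non-zero exit proves the kubelet is not running inside the VM:

# a non-zero exit from is-active means the kubelet unit is not running
$ out/minikube-darwin-amd64 ssh -p NoKubernetes-614000 "sudo systemctl is-active --quiet service kubelet"
$ echo $?    # 1 in the run above, wrapping the in-VM status 3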

TestNoKubernetes/serial/ProfileList (0.37s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.37s)

TestNoKubernetes/serial/Stop (2.36s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-614000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-614000: (2.361476954s)
--- PASS: TestNoKubernetes/serial/Stop (2.36s)

TestNoKubernetes/serial/StartNoArgs (75.58s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-614000 --driver=hyperkit 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-614000 --driver=hyperkit : (1m15.581712576s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (75.58s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.73s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19452
- KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1877462684/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1877462684/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1877462684/001/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1877462684/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.73s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (7.74s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19452
- KUBECONFIG=/Users/jenkins/minikube-integration/19452-977/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2015097049/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2015097049/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2015097049/001/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2015097049/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (7.74s)
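
Both skip-upgrade cases surface the same hyperkit precondition: the driver binary must be root-owned and setuid before minikube can create VMs, and with --interactive=false the test simply tolerates sudo being unavailable. The fix minikube prints is the standard one (the path below is a placeholder; substitute your install's .minikube/bin):

# make the driver root-owned and setuid so it can manage hyperkit VMs
$ sudo chown root:wheel /path/to/.minikube/bin/docker-machine-driver-hyperkit
$ sudo chmod u+s /path/to/.minikube/bin/docker-machine-driver-hyperkit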

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.16s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-614000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-614000 "sudo systemctl is-active --quiet service kubelet": exit status 80 (159.703843ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node NoKubernetes-614000 host status: state: docker-machine-driver-hyperkit needs to run with elevated permissions. Please run the following command, then try again: sudo chown root:wheel /Users/jenkins/workspace/testdata/hyperkit-driver-without-version/docker-machine-driver-hyperkit && sudo chmod u+s /Users/jenkins/workspace/testdata/hyperkit-driver-without-version/docker-machine-driver-hyperkit
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.16s)

TestNetworkPlugins/group/auto/Start (684.29s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-652000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit 
E0815 17:13:57.864091    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:15:52.607166    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/skaffold-329000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:17:15.689680    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/skaffold-329000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:17:38.051725    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:18:57.871260    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:20:20.947448    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:20:52.614002    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/skaffold-329000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:22:38.068769    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:23:57.894228    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p auto-652000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit : (11m24.286051656s)
--- PASS: TestNetworkPlugins/group/auto/Start (684.29s)

TestNetworkPlugins/group/auto/KubeletFlags (0.15s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-652000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.15s)

TestNetworkPlugins/group/auto/NetCatPod (12.14s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-652000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-v7nps" [08ab298c-894c-4281-8ec8-d034f0d9f50a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-v7nps" [08ab298c-894c-4281-8ec8-d034f0d9f50a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.00209504s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.14s)

TestNetworkPlugins/group/auto/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-652000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-652000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

TestNetworkPlugins/group/auto/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-652000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)
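
Localhost and HairPin are complementary probes: the first dials the pod's own port via localhost, while HairPin dials the pod's own service name ("netcat"), which only succeeds if hairpin NAT lets a pod reach itself back through its service VIP. The probe, exactly as the test runs it:

# from inside the netcat pod, connect to our own service; requires hairpin mode
$ kubectl --context auto-652000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"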

TestNetworkPlugins/group/kindnet/Start (656.65s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-652000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit 
E0815 17:25:52.638207    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/skaffold-329000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:27:38.082371    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:28:57.900866    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:29:01.161526    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:30:16.350916    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/auto-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:30:16.358541    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/auto-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:30:16.371707    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/auto-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:30:16.394697    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/auto-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:30:16.438133    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/auto-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:30:16.521021    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/auto-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:30:16.684569    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/auto-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:30:17.006267    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/auto-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:30:17.648720    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/auto-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:30:18.931867    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/auto-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:30:21.494906    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/auto-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:30:26.618289    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/auto-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:30:36.861217    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/auto-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:30:52.641979    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/skaffold-329000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:30:57.344423    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/auto-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:31:38.308631    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/auto-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:32:38.087477    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:33:00.231708    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/auto-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:33:55.730041    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/skaffold-329000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:33:57.904600    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:35:16.356643    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/auto-652000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-652000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit : (10m56.645454436s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (656.65s)

TestNetworkPlugins/group/calico/Start (199.36s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-652000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit 
E0815 17:35:44.076599    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/auto-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:35:52.648130    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/skaffold-329000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p calico-652000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit : (3m19.3583052s)
--- PASS: TestNetworkPlugins/group/calico/Start (199.36s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-rbgmv" [f19baa04-8f87-4e5c-8ecf-634d62453baa] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005408882s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
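
Unlike the auto (bridge) run, a CNI-specific run first waits for the plugin's own pod (here the kindnet DaemonSet) before exercising the network. The test uses its own poll helper; a rough kubectl equivalent of the same wait would be:

# block until the kindnet pod is Ready, or give up after the timeout
$ kubectl --context kindnet-652000 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=600s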

TestNetworkPlugins/group/kindnet/KubeletFlags (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-652000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.15s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.14s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-652000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-f6wtg" [48ba5afc-18d5-41d2-9053-a07a247bd0cd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-f6wtg" [48ba5afc-18d5-41d2-9053-a07a247bd0cd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004383681s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.14s)
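The NetCatPod subtests (re)deploy the probe pod with kubectl replace --force, which deletes and recreates the objects in the manifest so each plugin group starts from a fresh netcat deployment, then wait for app=netcat as above. A sketch of just the forced replace, with the context and manifest path copied from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// replace --force deletes and recreates whatever the manifest defines.
		out, err := exec.Command("kubectl", "--context", "kindnet-652000",
			"replace", "--force", "-f", "testdata/netcat-deployment.yaml").CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("replace failed:", err)
		}
	}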

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-652000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)
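The DNS subtest is a single in-pod lookup of the cluster's own API service name. A sketch of the same probe through kubectl exec; the context name is the per-profile placeholder used above:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Resolve the in-cluster service name from inside the netcat pod,
		// the same command the test shells out to.
		out, err := exec.Command("kubectl", "--context", "kindnet-652000",
			"exec", "deployment/netcat", "--",
			"nslookup", "kubernetes.default").CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("DNS probe failed:", err)
		}
	}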

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-652000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-652000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)
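Localhost and HairPin differ only in the nc target: Localhost checks that the pod can reach port 8080 on its own loopback, while HairPin dials the netcat service name, so the connection has to leave the pod and be routed back to it (hairpin traffic). One sketch covers both probes; the nc flags (-w 5 -i 5 -z) are copied from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// probe runs the same nc check the tests use: -w 5 is a 5s timeout and
	// -z connects without sending data. A nil error means the port answered.
	func probe(kubeContext, target string) error {
		return exec.Command("kubectl", "--context", kubeContext,
			"exec", "deployment/netcat", "--",
			"/bin/sh", "-c", fmt.Sprintf("nc -w 5 -i 5 -z %s 8080", target)).Run()
	}

	func main() {
		ctx := "kindnet-652000" // placeholder profile context from the log
		fmt.Println("localhost:", probe(ctx, "localhost")) // Localhost subtest
		fmt.Println("hairpin:", probe(ctx, "netcat"))      // HairPin subtest
	}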

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (54.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-652000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit 
E0815 17:37:38.117794    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-652000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit : (54.915572975s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (54.92s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-652000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-652000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-64ccp" [3e4f7fde-1df6-45e4-bd66-d88fb41e8372] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-64ccp" [3e4f7fde-1df6-45e4-bd66-d88fb41e8372] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003176194s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-652000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-652000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-652000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/false/Start (81.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p false-652000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit 
E0815 17:38:57.950818    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p false-652000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit : (1m21.439568992s)
--- PASS: TestNetworkPlugins/group/false/Start (81.44s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-jl4t7" [bdfb7428-c875-4aab-86f7-b6351cb348c0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003229489s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-652000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-652000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-br88b" [f0f53fc7-b6cd-4182-bfde-a575d8f61732] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-br88b" [f0f53fc7-b6cd-4182-bfde-a575d8f61732] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004598313s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.15s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-652000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-652000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-652000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (81.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-652000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-652000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit : (1m21.396743746s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (81.40s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-652000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (12.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-652000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-sjggq" [2b0ed956-73e4-4806-b70e-aa1e65e53fc6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-sjggq" [2b0ed956-73e4-4806-b70e-aa1e65e53fc6] Running
E0815 17:40:16.402128    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/auto-652000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.003160014s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.18s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-652000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-652000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-652000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (51.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-652000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit 
E0815 17:40:52.694365    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/skaffold-329000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-652000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit : (51.112903102s)
--- PASS: TestNetworkPlugins/group/flannel/Start (51.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-652000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-652000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9nrts" [4baf74c4-5569-4606-9c88-f0aede6ac076] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9nrts" [4baf74c4-5569-4606-9c88-f0aede6ac076] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.002922392s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-652000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-652000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-652000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-b2krh" [1462352d-1365-4d79-a4d3-65ae14b5f1a8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003514876s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (78.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-652000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-652000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit : (1m18.939829211s)
--- PASS: TestNetworkPlugins/group/bridge/Start (78.94s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-652000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-652000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7zh95" [1f6a5db8-7b9c-4ed9-ba25-1be1d198f295] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-7zh95" [1f6a5db8-7b9c-4ed9-ba25-1be1d198f295] Running
E0815 17:41:42.388979    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kindnet-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:41:42.396486    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kindnet-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:41:42.407675    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kindnet-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:41:42.429600    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kindnet-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:41:42.471156    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kindnet-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:41:42.552562    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kindnet-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:41:42.714073    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kindnet-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:41:43.036026    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kindnet-652000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.002757251s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.33s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-652000 exec deployment/netcat -- nslookup kubernetes.default
E0815 17:41:43.677800    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kindnet-652000/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-652000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-652000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (46.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-652000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit 
E0815 17:42:02.890253    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kindnet-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:42:23.372329    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kindnet-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:42:38.140070    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-652000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit : (46.622628358s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (46.62s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-652000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (13.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-652000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9s294" [e911ddcd-6137-4425-af9a-981dfd1f3ec3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9s294" [e911ddcd-6137-4425-af9a-981dfd1f3ec3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 13.004449243s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (13.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-652000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-652000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-x5sn6" [aad6ceac-83ab-4a5a-a042-65f06700cafb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-x5sn6" [aad6ceac-83ab-4a5a-a042-65f06700cafb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004172686s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-652000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-652000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-652000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (21.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-652000 exec deployment/netcat -- nslookup kubernetes.default
E0815 17:43:04.336898    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kindnet-652000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Non-zero exit: kubectl --context kubenet-652000 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.104260869s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context kubenet-652000 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context kubenet-652000 exec deployment/netcat -- nslookup kubernetes.default: (5.133421293s)
--- PASS: TestNetworkPlugins/group/kubenet/DNS (21.30s)
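This is the one DNS check in the section that needed a retry: the first nslookup timed out ("connection timed out; no servers could be reached", exit status 1 after ~15s) and the second attempt succeeded. A bounded-retry wrapper in the same spirit; the three-attempt policy and the sleep are assumptions, not what net_test.go actually does:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Context name copied from the kubenet log; retry policy assumed.
		for attempt := 1; attempt <= 3; attempt++ {
			out, err := exec.Command("kubectl", "--context", "kubenet-652000",
				"exec", "deployment/netcat", "--",
				"nslookup", "kubernetes.default").CombinedOutput()
			fmt.Printf("attempt %d:\n%s", attempt, out)
			if err == nil {
				return
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("DNS never resolved")
	}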

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (141.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-255000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0
E0815 17:43:18.632909    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/custom-flannel-652000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-255000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0: (2m21.365762291s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (141.37s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-652000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-652000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.10s)
E0815 17:58:57.918659    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:59:03.119514    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/calico-652000/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (90.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-442000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.0
E0815 17:43:54.480779    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/custom-flannel-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:43:57.957216    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:44:03.159375    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/calico-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:44:03.165733    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/calico-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:44:03.177633    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/calico-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:44:03.199006    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/calico-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:44:03.240968    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/calico-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:44:03.322407    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/calico-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:44:03.484265    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/calico-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:44:03.805554    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/calico-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:44:04.446921    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/calico-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:44:05.728539    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/calico-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:44:08.290261    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/calico-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:44:13.412800    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/calico-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:44:23.654338    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/calico-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:44:26.261725    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kindnet-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:44:35.444357    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/custom-flannel-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:44:44.137724    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/calico-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:45:04.491465    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/false-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:45:04.498008    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/false-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:45:04.510519    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/false-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:45:04.532100    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/false-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:45:04.573624    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/false-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:45:04.655439    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/false-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:45:04.817177    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/false-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:45:05.139149    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/false-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:45:05.780975    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/false-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:45:07.062611    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/false-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:45:09.624153    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/false-652000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-442000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.0: (1m30.025648437s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (90.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-442000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [766625ce-7471-4685-8d8f-dd8c2154336a] Pending
helpers_test.go:344: "busybox" [766625ce-7471-4685-8d8f-dd8c2154336a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [766625ce-7471-4685-8d8f-dd8c2154336a] Running
E0815 17:45:14.746434    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/false-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:45:16.408620    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/auto-652000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004488949s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-442000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.21s)
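DeployApp waits for the busybox pod to run and then reads the open-file limit inside it with "ulimit -n". Handling that output is just trimming and converting the single number; the context and pod name are copied from the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strconv"
		"strings"
	)

	func main() {
		// Read the open-file limit inside the running busybox pod.
		out, err := exec.Command("kubectl", "--context", "no-preload-442000",
			"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
		if err != nil {
			fmt.Println("exec failed:", err)
			return
		}
		n, err := strconv.Atoi(strings.TrimSpace(string(out)))
		if err != nil {
			fmt.Println("unexpected output:", string(out))
			return
		}
		fmt.Println("open-file limit in pod:", n)
	}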

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.74s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-442000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-442000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.74s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (8.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-442000 --alsologtostderr -v=3
E0815 17:45:24.989781    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/false-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:45:25.101449    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/calico-652000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-442000 --alsologtostderr -v=3: (8.391575997s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (8.39s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-442000 -n no-preload-442000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-442000 -n no-preload-442000: exit status 7 (69.112581ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-442000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.38s)
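After a stop, minikube status exits non-zero; the harness logs "status error: exit status 7 (may be ok)" and carries on. A sketch of distinguishing that expected code from a real failure; the binary path and profile are the placeholders used throughout this report, and treating exit 7 as "host stopped" is read off this log rather than any documented contract:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-amd64", "status",
			"--format={{.Host}}", "-p", "no-preload-442000")
		out, err := cmd.Output()
		fmt.Printf("%s", out)
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 7 {
			// The exit code this run produced after "minikube stop".
			fmt.Println("host stopped (exit 7); expected after a stop")
		} else if err != nil {
			fmt.Println("status failed:", err)
		}
	}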

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (293.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-442000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-442000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.0: (4m52.879945663s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-442000 -n no-preload-442000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (293.05s)
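SecondStart re-runs the original start command against the stopped profile and then confirms the host came back, which is the status call right after the Done line above. A condensed sketch of that start-then-verify sequence, with flags copied from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) ([]byte, error) {
		// Placeholder binary path, as used throughout this report.
		return exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
	}

	func main() {
		if out, err := run("start", "-p", "no-preload-442000", "--memory=2200",
			"--wait=true", "--preload=false", "--driver=hyperkit",
			"--kubernetes-version=v1.31.0"); err != nil {
			fmt.Printf("%s\nsecond start failed: %v\n", out, err)
			return
		}
		out, _ := run("status", "--format={{.Host}}", "-p", "no-preload-442000")
		fmt.Printf("host state after restart: %s\n", out)
	}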

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (7.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-255000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ee9e6840-189c-4bcd-924e-b444770a5ee7] Pending
helpers_test.go:344: "busybox" [ee9e6840-189c-4bcd-924e-b444770a5ee7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0815 17:45:41.224861    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [ee9e6840-189c-4bcd-924e-b444770a5ee7] Running
E0815 17:45:45.471907    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/false-652000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.004616521s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-255000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.34s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.76s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-255000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-255000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.76s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (8.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-255000 --alsologtostderr -v=3
E0815 17:45:52.699925    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/skaffold-329000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-255000 --alsologtostderr -v=3: (8.40720644s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (8.41s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-255000 -n old-k8s-version-255000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-255000 -n old-k8s-version-255000: exit status 7 (68.566817ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-255000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (403.76s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-255000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0
E0815 17:45:57.367749    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/custom-flannel-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:46:01.094577    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/enable-default-cni-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:46:01.101148    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/enable-default-cni-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:46:01.112534    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/enable-default-cni-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:46:01.134202    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/enable-default-cni-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:46:01.175793    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/enable-default-cni-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:46:01.257958    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/enable-default-cni-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:46:01.419410    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/enable-default-cni-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:46:01.741806    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/enable-default-cni-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:46:02.383745    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/enable-default-cni-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:46:03.665094    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/enable-default-cni-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:46:06.227290    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/enable-default-cni-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:46:11.349525    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/enable-default-cni-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:46:21.592397    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/enable-default-cni-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:46:25.042753    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/flannel-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:46:25.049755    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/flannel-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:46:25.061075    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/flannel-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:46:25.083121    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/flannel-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:46:25.126124    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/flannel-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:46:25.209316    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/flannel-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:46:25.371708    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/flannel-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:46:25.694298    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/flannel-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:46:26.336108    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/flannel-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:46:26.434984    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/false-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:46:27.617906    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/flannel-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:46:30.181595    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/flannel-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:46:35.303582    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/flannel-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:46:39.493657    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/auto-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:46:42.074237    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/enable-default-cni-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:46:42.395158    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kindnet-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:46:45.545522    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/flannel-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:46:47.026612    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/calico-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:47:06.028180    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/flannel-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:47:10.106669    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kindnet-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:47:23.037343    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/enable-default-cni-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:47:38.145607    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:47:46.991197    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/flannel-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:47:48.358772    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/false-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:47:48.896978    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kubenet-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:47:48.903839    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kubenet-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:47:48.915596    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kubenet-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:47:48.937394    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kubenet-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:47:48.978739    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kubenet-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:47:49.060026    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kubenet-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:47:49.222493    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kubenet-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:47:49.543982    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kubenet-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:47:49.924428    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/bridge-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:47:49.931375    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/bridge-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:47:49.944119    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/bridge-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:47:49.967457    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/bridge-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:47:50.011095    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/bridge-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:47:50.092830    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/bridge-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:47:50.187628    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kubenet-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:47:50.255458    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/bridge-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:47:50.577534    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/bridge-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:47:51.219178    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/bridge-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:47:51.470353    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kubenet-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:47:52.501358    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/bridge-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:47:54.031817    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kubenet-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:47:55.063900    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/bridge-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:47:59.153963    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kubenet-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:48:00.185829    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/bridge-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:48:09.396083    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kubenet-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:48:10.428187    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/bridge-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:48:13.503860    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/custom-flannel-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:48:29.878700    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kubenet-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:48:30.911484    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/bridge-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:48:41.213619    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/custom-flannel-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:48:44.960662    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/enable-default-cni-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:48:57.964100    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:49:03.166114    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/calico-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:49:08.915619    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/flannel-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:49:10.841103    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kubenet-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:49:11.874535    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/bridge-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:49:30.871726    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/calico-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:50:04.497613    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/false-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:50:16.416053    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/auto-652000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-255000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0: (6m43.586296392s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-255000 -n old-k8s-version-255000
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (403.76s)
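
The cert_rotation.go:171 lines interleaved above come from the long-lived test process (PID 1498), whose client-go certificate watcher still references client certificates of profiles such as flannel-652000 and bridge-652000 that were deleted earlier in the run; they are noise relative to this subtest, which passes. A hedged sketch for spotting such stale references in a kubeconfig (the jsonpath expression is illustrative):

    # Flag kubeconfig users whose client-certificate file no longer exists on
    # disk, the condition behind the "no such file or directory" errors above.
    kubectl config view -o jsonpath='{range .users[*]}{.user.client-certificate}{"\n"}{end}' |
    while read -r crt; do
      [ -n "$crt" ] && [ ! -f "$crt" ] && echo "stale cert reference: $crt"
    done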

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-96bzq" [865d7467-3987-4a9a-9763-d57ad618587c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003846744s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)
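
The harness polls up to 9m for a Running pod carrying the k8s-app=kubernetes-dashboard label; an approximately equivalent one-liner, assuming kubectl wait is an acceptable stand-in for the harness's own poll loop:

    # Block until the dashboard pod is Ready, mirroring the test's 9m budget.
    kubectl --context no-preload-442000 -n kubernetes-dashboard \
      wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m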

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-96bzq" [865d7467-3987-4a9a-9763-d57ad618587c] Running
E0815 17:50:32.203737    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/false-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:50:32.765753    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kubenet-652000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004428342s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-442000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)
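
AddonExistsAfterStop verifies that the dashboard-metrics-scraper deployment survived the stop/start cycle; the describe call above can be reduced to a bare existence check:

    # Hedged equivalent of the describe step: non-zero exit if the scraper
    # deployment is missing from the kubernetes-dashboard namespace.
    kubectl --context no-preload-442000 -n kubernetes-dashboard \
      get deploy dashboard-metrics-scraper -o name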

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-442000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.17s)
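
The image check parses the JSON emitted by image list; for manual inspection the same data flattens nicely with jq (the filter is illustrative and assumes jq is installed):

    # Print every tag known to the container runtime inside the profile's VM.
    out/minikube-darwin-amd64 -p no-preload-442000 image list --format=json |
      jq -r '.[].repoTags[]'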

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-442000 --alsologtostderr -v=1
E0815 17:50:33.797908    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/bridge-652000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-442000 -n no-preload-442000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-442000 -n no-preload-442000: exit status 2 (161.369286ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-442000 -n no-preload-442000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-442000 -n no-preload-442000: exit status 2 (161.672867ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-442000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-442000 -n no-preload-442000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-442000 -n no-preload-442000
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.01s)
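
Reconstructed from the commands above, Pause is a four-step sequence in which the intermediate status calls are expected to exit 2, another "may be ok" value, while components are paused:

    out/minikube-darwin-amd64 pause -p no-preload-442000 --alsologtostderr -v=1
    out/minikube-darwin-amd64 status --format='{{.APIServer}}' -p no-preload-442000 -n no-preload-442000   # "Paused", exit 2
    out/minikube-darwin-amd64 status --format='{{.Kubelet}}' -p no-preload-442000 -n no-preload-442000     # "Stopped", exit 2
    out/minikube-darwin-amd64 unpause -p no-preload-442000 --alsologtostderr -v=1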

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (51.56s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-798000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.31.0
E0815 17:50:52.706848    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/skaffold-329000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:51:01.101567    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/enable-default-cni-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:51:25.049694    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/flannel-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:51:28.806971    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/enable-default-cni-652000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-798000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.31.0: (51.561560673s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (51.56s)
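
--embed-certs inlines the client certificate and key into the kubeconfig entry instead of referencing files under .minikube/profiles; a quick way to confirm, assuming the profile was written to the default kubeconfig (the jsonpath filter is illustrative):

    # Non-empty output means the cert data is embedded rather than file-backed.
    kubectl config view --raw \
      -o jsonpath='{.users[?(@.name=="embed-certs-798000")].user.client-certificate-data}' | head -c 40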

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-798000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2bc3c97f-491f-4e74-806e-de507be9b979] Pending
helpers_test.go:344: "busybox" [2bc3c97f-491f-4e74-806e-de507be9b979] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2bc3c97f-491f-4e74-806e-de507be9b979] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003664943s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-798000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.20s)
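
testdata/busybox.yaml itself is not reproduced in this report; an approximately equivalent imperative form of the deploy-and-verify step, with the image borrowed from the VerifyKubernetesImages output earlier and the sleep command assumed:

    # Create a pod matching the name and label the harness waits on, then
    # repeat the ulimit probe (image and command are assumptions).
    kubectl --context embed-certs-798000 run busybox \
      --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc \
      --labels=integration-test=busybox --restart=Never -- sleep 3600
    kubectl --context embed-certs-798000 exec busybox -- /bin/sh -c "ulimit -n"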

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.8s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-798000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0815 17:51:42.402314    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kindnet-652000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-798000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.80s)
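
The test deliberately points the metrics-server addon at a non-resolvable registry (fake.domain), so enabling succeeds while the image can never actually be pulled; if the override composes the way I expect, the rewritten reference is visible on the deployment (jsonpath illustrative):

    # Likely prints something like fake.domain/registry.k8s.io/echoserver:1.4.
    kubectl --context embed-certs-798000 -n kube-system \
      get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'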

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (8.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-798000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-798000 --alsologtostderr -v=3: (8.421649052s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (8.42s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-798000 -n embed-certs-798000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-798000 -n embed-certs-798000: exit status 7 (68.890923ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-798000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (293.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-798000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.31.0
E0815 17:51:52.760951    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/flannel-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:52:38.099223    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-798000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.31.0: (4m53.206152715s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-798000 -n embed-certs-798000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (293.37s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-kmxbc" [104f19d9-810a-4836-938c-977b06af8145] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004042823s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-kmxbc" [104f19d9-810a-4836-938c-977b06af8145] Running
E0815 17:52:48.850585    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kubenet-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:52:49.877942    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/bridge-652000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004513981s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-255000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-255000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.16s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (1.9s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-255000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-255000 -n old-k8s-version-255000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-255000 -n old-k8s-version-255000: exit status 2 (155.081477ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-255000 -n old-k8s-version-255000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-255000 -n old-k8s-version-255000: exit status 2 (159.290952ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p old-k8s-version-255000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-255000 -n old-k8s-version-255000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-255000 -n old-k8s-version-255000
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (1.90s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-423000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.31.0
E0815 17:53:13.458332    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/custom-flannel-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:53:16.559101    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kubenet-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:53:17.590200    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/bridge-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:53:40.996222    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:53:57.915699    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/addons-640000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:54:03.117772    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/calico-652000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-423000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.31.0: (1m24.463275598s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.46s)
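
With --apiserver-port=8444 the profile's kubeconfig entry should target port 8444 rather than the default 8443; a hedged check (jsonpath filter illustrative):

    # Expected shape: https://<vm-ip>:8444
    kubectl config view \
      -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-423000")].cluster.server}'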

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-423000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8d8e6a65-5fe5-42c8-a1e9-653ea0f0ea62] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8d8e6a65-5fe5-42c8-a1e9-653ea0f0ea62] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.005053625s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-423000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.73s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-423000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-423000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.73s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (8.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-423000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-423000 --alsologtostderr -v=3: (8.420086871s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (8.42s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-423000 -n default-k8s-diff-port-423000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-423000 -n default-k8s-diff-port-423000: exit status 7 (68.406974ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-423000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (291.65s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-423000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.31.0
E0815 17:55:04.448531    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/false-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:55:10.639564    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/no-preload-442000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:55:10.645872    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/no-preload-442000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:55:10.658071    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/no-preload-442000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:55:10.679927    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/no-preload-442000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:55:10.721703    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/no-preload-442000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:55:10.804394    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/no-preload-442000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:55:10.966079    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/no-preload-442000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:55:11.287899    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/no-preload-442000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:55:11.930790    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/no-preload-442000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:55:13.213291    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/no-preload-442000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:55:15.775009    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/no-preload-442000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:55:16.365669    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/auto-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:55:20.896756    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/no-preload-442000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:55:31.139505    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/no-preload-442000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:55:39.865113    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/old-k8s-version-255000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:55:39.872770    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/old-k8s-version-255000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:55:39.885632    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/old-k8s-version-255000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:55:39.907821    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/old-k8s-version-255000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:55:39.949782    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/old-k8s-version-255000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:55:40.031660    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/old-k8s-version-255000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:55:40.194242    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/old-k8s-version-255000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:55:40.517135    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/old-k8s-version-255000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:55:41.159394    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/old-k8s-version-255000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:55:42.441749    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/old-k8s-version-255000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:55:45.005757    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/old-k8s-version-255000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:55:50.127293    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/old-k8s-version-255000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:55:51.621096    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/no-preload-442000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:55:52.657725    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/skaffold-329000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:56:00.370920    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/old-k8s-version-255000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:56:01.051894    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/enable-default-cni-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:56:20.854674    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/old-k8s-version-255000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:56:24.998467    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/flannel-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:56:32.583286    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/no-preload-442000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:56:42.351609    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kindnet-652000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-423000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.31.0: (4m51.483390739s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-423000 -n default-k8s-diff-port-423000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (291.65s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-m9fzd" [61eaad53-2507-446f-8015-af45e6c4070d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005244917s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-m9fzd" [61eaad53-2507-446f-8015-af45e6c4070d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003659706s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-798000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-798000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (1.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-798000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-798000 -n embed-certs-798000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-798000 -n embed-certs-798000: exit status 2 (163.511115ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-798000 -n embed-certs-798000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-798000 -n embed-certs-798000: exit status 2 (165.326677ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-798000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-798000 -n embed-certs-798000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-798000 -n embed-certs-798000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (1.98s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (41.58s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-941000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.0
E0815 17:57:38.099601    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/functional-506000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-941000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.0: (41.583646384s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (41.58s)
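
Here --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 is threaded through to kubeadm, and --wait is narrowed to the apiserver, system pods, and default service account since no CNI is installed yet; whether the CIDR took effect can be read off the node object:

    # Should print a subnet carved from 10.42.0.0/16 for the single node.
    kubectl --context newest-cni-941000 get nodes -o jsonpath='{.items[0].spec.podCIDR}'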

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.77s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-941000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.77s)
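
The enable line above uses the addon image-override syntax: --images and --registries take AddonName=value pairs, letting the test point metrics-server at an echoserver image on a deliberately unreachable registry. For reference:

	# Swap the image (name:tag) and registry for a single addon;
	# fake.domain is the test's intentionally bogus registry.
	minikube addons enable metrics-server -p newest-cni-941000 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain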

TestStartStop/group/newest-cni/serial/Stop (8.4s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-941000 --alsologtostderr -v=3
E0815 17:57:48.852906    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kubenet-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:57:49.880630    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/bridge-652000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-941000 --alsologtostderr -v=3: (8.402345845s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.40s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-941000 -n newest-cni-941000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-941000 -n newest-cni-941000: exit status 7 (67.768573ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-941000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0815 17:57:54.507239    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/no-preload-442000/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.32s)
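
status exits 7 here, rather than the 2 seen in the Pause checks, because the whole VM is down: minikube's help text describes the status exit code as a bit field read right to left (1 = host not OK, 2 = cluster not OK, 4 = Kubernetes not OK), so 7 means all three. A sketch of decoding it in bash (bit meanings as described in "minikube status --help"):

	minikube status -p newest-cni-941000 --format='{{.Host}}'
	rc=$?
	(( rc & 1 )) && echo "host (VM) not OK"
	(( rc & 2 )) && echo "cluster not OK"
	(( rc & 4 )) && echo "kubernetes not OK"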

TestStartStop/group/newest-cni/serial/SecondStart (29.55s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-941000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.0
E0815 17:58:05.424931    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/kindnet-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:58:13.458646    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/custom-flannel-652000/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:58:23.739577    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/old-k8s-version-255000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-941000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.0: (29.379643755s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-941000 -n newest-cni-941000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (29.55s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-941000 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.18s)

TestStartStop/group/newest-cni/serial/Pause (1.88s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-941000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-941000 -n newest-cni-941000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-941000 -n newest-cni-941000: exit status 2 (162.587237ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-941000 -n newest-cni-941000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-941000 -n newest-cni-941000: exit status 2 (159.923226ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-941000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-941000 -n newest-cni-941000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-941000 -n newest-cni-941000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (1.88s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-ncwjq" [2a2f22fc-7c5f-4f92-8a22-49ca883f918f] Running
E0815 17:59:36.530665    1498 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19452-977/.minikube/profiles/custom-flannel-652000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004657357s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
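
The harness polls up to 9m for pods carrying the k8s-app=kubernetes-dashboard label to be running and ready; here they turned healthy in about 6s. Roughly the same wait can be reproduced with stock kubectl (context name from this run; the readiness condition and timeout are illustrative):

	kubectl --context default-k8s-diff-port-423000 \
	  -n kubernetes-dashboard \
	  wait pod -l k8s-app=kubernetes-dashboard \
	  --for=condition=Ready --timeout=9m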

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-ncwjq" [2a2f22fc-7c5f-4f92-8a22-49ca883f918f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003554034s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-423000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-diff-port-423000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.18s)
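
VerifyKubernetesImages dumps the profile's image list as JSON and flags anything outside the expected Kubernetes image set; the busybox image noted above belongs to the test's own workload. One way to inspect the same list by hand (the jq filter assumes each JSON entry carries a repoTags array, which may vary across minikube versions):

	minikube -p default-k8s-diff-port-423000 image list --format=json \
	  | jq -r '.[].repoTags[]' | sort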

TestStartStop/group/default-k8s-diff-port/serial/Pause (1.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-423000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-423000 -n default-k8s-diff-port-423000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-423000 -n default-k8s-diff-port-423000: exit status 2 (167.559607ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-423000 -n default-k8s-diff-port-423000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-423000 -n default-k8s-diff-port-423000: exit status 2 (162.726982ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-423000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-423000 -n default-k8s-diff-port-423000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-423000 -n default-k8s-diff-port-423000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (1.92s)


Test skip (20/327)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (5.68s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-652000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-652000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-652000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-652000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-652000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-652000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-652000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-652000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-652000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-652000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-652000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-652000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652000"

>>> host: /etc/hosts:
* Profile "cilium-652000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652000"

>>> host: /etc/resolv.conf:
* Profile "cilium-652000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-652000

>>> host: crictl pods:
* Profile "cilium-652000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652000"

>>> host: crictl containers:
* Profile "cilium-652000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652000"

>>> k8s: describe netcat deployment:
error: context "cilium-652000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-652000" does not exist

>>> k8s: netcat logs:
error: context "cilium-652000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-652000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-652000" does not exist

>>> k8s: coredns logs:
error: context "cilium-652000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-652000" does not exist

>>> k8s: api server logs:
error: context "cilium-652000" does not exist

>>> host: /etc/cni:
* Profile "cilium-652000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652000"

>>> host: ip a s:
* Profile "cilium-652000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652000"

>>> host: ip r s:
* Profile "cilium-652000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652000"

>>> host: iptables-save:
* Profile "cilium-652000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652000"

>>> host: iptables table nat:
* Profile "cilium-652000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-652000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-652000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-652000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-652000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-652000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-652000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-652000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-652000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-652000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-652000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-652000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-652000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652000"

>>> host: kubelet daemon config:
* Profile "cilium-652000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652000"

>>> k8s: kubelet logs:
* Profile "cilium-652000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-652000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-652000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-652000

>>> host: docker daemon status:
* Profile "cilium-652000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652000"

>>> host: docker daemon config:
* Profile "cilium-652000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-652000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652000"

>>> host: docker system info:
* Profile "cilium-652000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652000"

>>> host: cri-docker daemon status:
* Profile "cilium-652000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652000"

>>> host: cri-docker daemon config:
* Profile "cilium-652000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-652000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-652000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652000"

>>> host: cri-dockerd version:
* Profile "cilium-652000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652000"

>>> host: containerd daemon status:
* Profile "cilium-652000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652000"

>>> host: containerd daemon config:
* Profile "cilium-652000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-652000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-652000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652000"

>>> host: containerd config dump:
* Profile "cilium-652000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652000"

>>> host: crio daemon status:
* Profile "cilium-652000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652000"

>>> host: crio daemon config:
* Profile "cilium-652000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652000"

>>> host: /etc/crio:
* Profile "cilium-652000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652000"

>>> host: crio config:
* Profile "cilium-652000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652000"

----------------------- debugLogs end: cilium-652000 [took: 5.465541272s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-652000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-652000
--- SKIP: TestNetworkPlugins/group/cilium (5.68s)
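
Every kubectl probe in the debugLogs dump above fails the same way because the cilium-652000 profile was never started: the dumped kubectl config contains no clusters, contexts, or users, so each --context lookup fails locally before anything is sent to a server. That state can be confirmed directly:

	kubectl config get-contexts              # empty for this run
	kubectl --context cilium-652000 get pods
	# error: context "cilium-652000" does not exist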

TestStartStop/group/disable-driver-mounts (0.28s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-364000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-364000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.28s)
